McI – Mann Face Off: MBH98/99

I’ve been reading up on the early days of the whole shebang, starting with the early correspondence between McI and Mann.

Here’s a CA post that refers to correspondence between McI and Mann re data.

Mann told Antonio Regalado of the Wall Street Journal that he would not be “intimidated” into releasing source code for MBH98.

Here is an account of correspondence with Mann and with the U.S. National Science Foundation:

Even before publication of MM03, we politely requested clarification on issues in MBH98. This was a source of controversy in late 2003. Here is a record of correspondence with Mann which we made available some time ago.

After publication of MM03, Mann argued that MM03 contained an incorrect implementation of a stepwise principal components procedure (which was not documented in MBH98). Details of this procedure have continued to drift in, with the first listing of the number of PC series retained in each calculation step/tree ring network combination provided in the July 2004 Corrigendum SI. This listing was inconsistent with prior information.

In August 2004, through Nature, we became aware privately of claims that a variation of Preisendorfer’s Rule N had been used to determine the number of retained PC series. This claim was published in November 2004. (We have not been able to verify actual application of this criterion, as actual numbers are impossible to replicate. See Was Preisendorfer’s Rule N Used?)

In any event, immediately after we learned of the previously undocumented stepwise procedure, we asked to inspect MBH98 source code so that we could completely reconcile results and avoid this type of dispute. Attached is our correspondence after MM03, which obviously should be construed not as a form of “intimidation” but as an entirely proper request.

Subsequently, we located some Fortran code at Mann’s FTP site for the calculation of tree ring principal components. Although this code is only a very small fraction of the total code, it contains a procedure which was materially misrepresented in MBH98 and which additionally is not statistically valid. We reported on this in MM05 (GRL) and MM05 (EE).

As I’ve pointed out in various postings on Replication, it is impossible on the present record to replicate important steps in other parts of MBH98.

After our unsuccessful attempts at obtaining source code, we asked the U.S. National Science Foundation for assistance. This was also unsuccessful. The correspondence is here.

We also made attempts with Nature; I’ll get to describing this another day.
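Since Preisendorfer’s Rule N figures in the account above, here is a minimal sketch of how such a selection rule is conventionally understood to work: a PC is retained only while its eigenvalue exceeds the same-rank eigenvalues obtained from noise-only Monte Carlo trials. This is a generic, assumed interpretation with invented data, not Mann’s code; whether and how MBH98 actually applied the rule is exactly the question McIntyre says he could not verify.

```python
# Generic Rule N sketch (assumed interpretation, invented data; not MBH98 code).
import numpy as np

rng = np.random.default_rng(1)
n_years, n_series, n_trials = 79, 20, 500

def eigvals_desc(X):
    """Eigenvalues of the correlation matrix of the columns of X, largest first."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Synthetic proxy network: white noise plus a shared trend in five series.
data = rng.standard_normal((n_years, n_series))
data[:, :5] += np.linspace(-1.0, 1.0, n_years)[:, None]

obs = eigvals_desc(data)

# Null benchmark: eigenvalue spectra of pure-noise networks of the same shape.
null = np.array([eigvals_desc(rng.standard_normal((n_years, n_series)))
                 for _ in range(n_trials)])
threshold = np.percentile(null, 95, axis=0)   # 95th percentile at each rank

# Retain the leading PCs whose eigenvalues beat the noise benchmark.
fails = obs < threshold
n_retained = int(np.argmax(fails)) if fails.any() else n_series
print(f"Rule N retains {n_retained} PC(s)")
```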

Since the links are to the now-defunct website, I used the Wayback Machine to locate this page, which has a record of McIntyre’s correspondence with Mann etc.

Here’s the first email, dated April 8, 2003:

Dear Dr. Mann,

I have been studying MBH98 and 99. I located datasets for the 13 series used in 99 at ftp://eclogite.geo.umass.edu/pub/mann/ONLINE-PREPRINTS/Millennium/DATA/PROXIES/ (the convenience of the ftp: location being excellent) and was interested in locating similar information on the 112 proxies referred to in MBH98, as well as a listing (the listing at http://www.ngdc.noaa.gov/paleo/ei/data_supp.html is for 390 datasets, and I gather/presume that many of these listed datasets have been condensed into PCs, as mentioned in the paper itself).

Thank you for your attention.

Yours truly,

Stephen McIntyre, Toronto, Canada

There is some back and forth between Rutherford and McI that you can see at the link.

Here is another interesting email from McI to Mann, dated September 9, 2003:

Dear Prof. Mann,

I have tried diligently to reconstruct your temperature principal components as described in MBH98, but without success, and would appreciate some assistance.

I downloaded hadcrut2.dat from CRU (July 2003 edition), truncated the data to 1902-1995 and further truncated it to the 1082 cells at gridpoints.loc and arranged as 1082 time-series with 1128 monthly readings. This step was successful as I could match your map of cell locations. I standardized each series to mean 0 and sd 1 for the period 1902-95.  In MBH98, you say that you carried out “conventional” PCA, but there is so much missing data that conventional PCA failed when I tried. In particular, 4 cells had no values at all and I don’t see why they were included in your selection.  Most PCA algorithms balk at missing data or exclude it. How did you deal with the extensive missing data?

I downloaded the EOFs, PCs and eigenvector loadings from ftp://eclogite.geo.umass.edu/pub/mann/MANNETAL98/EIGENVECTORS/ .  I spliced the EOFs into a 16×1082 matrix and the PCs (pc01.out, etc.) into a 92×16 matrix. I made a diagonal of the first 16 values in column 2 of “tpca-eigenvals.out”, which look like eigenvalues, and carried out an expansion.  I then deducted the grid-box values generated from this expansion from the Jones data as above; calculated variance for each year across available cells and made a sum, comparing this to the variance similarly calculated in the standardized Jones data.   I obtained very low/much lower explained variance from this than you got.  I also tried some experiments and it also doesn’t seem to me that the first 16 EOFs maximize explained variance, as they should.  I would appreciate any assistance or clarification which you could give.

Regards, Steve McIntyre
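For readers trying to follow the linear algebra in this email, here is a minimal sketch of the expansion and explained-variance check McIntyre describes, using random stand-ins (with the shapes he gives) in place of the actual grid data and the EOF/PC/eigenvalue files on the FTP site. It illustrates the procedure only; it is not a replication.

```python
# Toy version of the explained-variance check described above (random stand-ins).
import numpy as np

rng = np.random.default_rng(0)
n_years, n_cells, n_eofs = 94, 1082, 16   # 1902-1995 annual values, 1082 cells

# Stand-in for the truncated, annualized Jones grid data (complete; the real
# data have extensive missing values, which is why plain PCA balks, as noted).
X = rng.standard_normal((n_years, n_cells))

# Standardize each grid-cell series to mean 0, sd 1, as in the email.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Stand-ins for the spliced pc*.out / EOF / tpca-eigenvals.out files, taken
# here from an SVD of X itself so the shapes and algebra of the check line up.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pcs, eigs, eofs = U[:, :n_eofs], s[:n_eofs], Vt[:n_eofs, :]

# The expansion: grid-box values implied by the first 16 EOFs.
X_hat = pcs @ np.diag(eigs) @ eofs

# Deduct the expansion from the data and compare residual to total variance.
resid = X - X_hat
explained = 1.0 - resid.var() / X.var()
print(f"variance explained by the first {n_eofs} EOFs: {explained:.3f}")
```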

He received no reply to this and so sent the following, dated September 25, 2003:

Dear Prof Mann

Here is the pcproxy.txt file sent to me last April by Scott Rutherford at your direction.  It contains some missing data after 1971. Your 1998 paper does not describe how missing data in this period is treated and I wanted to verify that it is the correct file. How did you handle missing data in this period? In earlier periods, it looks like you changed the roster of proxies in each of the periods described in the Supplementary Information using only proxies available throughout the entire period. I have obtained quite close replication of the rpc1 in the 20th century by calculating coefficients for the proxies and then calculating the rpc’s using the minimization procedures described in MBH98 and the selection of PCs in the Supplementary Information.  The reconstruction is less close in earlier periods.  I also don’t understand the reasoning for reducing the roster of eigenvectors in earlier periods.  The description in MBH98 was necessarily very terse and is still very terse in the Supplementary Information; is there any more detailed description of the reconstruction methodology to help me resolve this? Thank you for your attention.

Yours truly,
Steve McIntyre,
Toronto, Canada

The response, on the same day:

Dear Mr. McIntyre,

A few of the series terminate prior to the nominal 1980 termination date of the calibration period (the earliest such instance, as you note, is 1971). In such cases, the data were continued to the 1980 boundary by persistence of the final available value. These details, in fact, were provided in the supplementary information that accompanied the Nature article. That information is available here (see first paragraph):
ftp://eclogite.geo.umass.edu/pub/mann/ONLINE-PREPRINTS/MultiProxy/data-supp.html
and here:
http://www.ngdc.noaa.gov/paleo/ei/data_supp.html

The results, incidentally, are insensitive to this step; essentially the same reconstruction is achieved if a calibration period terminating in 1970 (prior to the termination of any of the proxy series) was used instead.

Owing to numerous demands on my time, I will not be able to respond to further inquiries.
Other researchers have successfully implemented our methodology based on the information provided in our articles  [see e.g. Zorita, E., F. Gonzalez-Rouco, and S. Legutke, Testing the Mann et al. (1998) approach to paleoclimate reconstructions in the context of a 1000-yr control simulation with the ECHO-G Coupled Climate Model, J. Climate, 16, 1378-1390, 2003.]. I trust, therefore, that you will find (as in this case) that all necessary details are provided in the papers we have published or the supplementary information links provided by those papers.

Best of luck with your work.

Sincerely,
Michael E. Mann
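To make the infill Mann describes concrete, here is a minimal sketch of “persistence” infilling: a series ending before the 1980 calibration boundary is extended by repeating its final available value. The numbers below are invented for illustration; only the 1971 termination date comes from the correspondence.

```python
# Persistence infill sketch (invented values; only the 1971 end date is from above).
import numpy as np

years = np.arange(1965, 1981)                # tail of the calibration period
series = np.full(len(years), np.nan)
series[:7] = [0.3, 0.1, -0.2, 0.4, 0.6, 0.5, 0.7]   # series terminates in 1971

last = np.flatnonzero(~np.isnan(series))[-1]
series[last + 1:] = series[last]             # persist the 1971 value to 1980

print(dict(zip(years.tolist(), np.round(series, 2).tolist())))
```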

So according to Mann, Zorita et al. successfully implemented the MBH methodology.

Here’s the abstract:

ABSTRACT

Statistical reconstructions of past climate variability based on climate indicators face several uncertainties:  for instance, to what extent is the network of available proxy indicators dense enough for a meaningful estimation of past global temperatures?; can statistical models, calibrated with data at interannual timescales be used to estimate the low-frequency variability of the past climate?; and what is the influence of the limited spatial coverage of the instrumental records used to calibrate the statistical models? Possible answers to these questions are searched by applying the statistical method of Mann et al. to a long control climate simulation as a climate surrogate. The role of the proxy indicators is played by the temperature simulated by the model at selected grid points.

It is found that generally a set of a few tens of climate indicators is enough to provide a meaningful estimation (resolved variance of about 30%) of the simulated global annual temperature at annual timescales. The reconstructions based on around 10 indicators are barely able to resolve 10% of the temperature variance. The skill of the regression model increases at lower frequencies, so that at timescales longer than 20 yr the explained variance may reach 65%. However, the reconstructions tend to underestimate some periods of global cooling that are associated with temperature anomalies off the Antarctic coast and south of Greenland lasting for about 20 yr. Also, it is found that in one 100-yr period, the low-frequency behavior of the global temperature evolution is not well reproduced, the error being probably related to tropical dynamics.

This analysis could be influenced by the lack of a realistic variability of external forcing in the simulation and also by the quality of simulated key variability modes, such as ENSO. Both factors can affect the large-scale coherence of the temperature field and, therefore, the skill of the statistical models.

Perhaps people can help me out here:

McI makes a request to Mann for data so he can do his ‘study’ on MBH98/99. Mann refers McI to his associate, Rutherford, who can’t locate the ftp site with the data and so sends McI an Excel file. As a result, McI cannot replicate the results, and so asks for more assistance and help working out the methodology. Mann points him to Zorita et al., who he argues successfully reproduced the methodology in their climate modeling.

It is only after M&M03 that Mann responds that the Excel data was erroneous; on this account, the M&M03 findings were wrong because they used the wrong data.

Here’s the original paper: Corrections to the Mann et al. (1998) Proxy Database and Northern Hemispheric Average Temperature Series

Here’s the abstract:

Abstract:
The data set of proxies of past climate used in Mann, Bradley and Hughes (1998, “MBH98” hereafter) for the estimation of temperatures from 1400 to 1980 contains collation errors, unjustifiable truncation or extrapolation of source data, obsolete data, geographical location errors, incorrect calculation of principal components and other quality control defects. We detail these errors and defects. We then apply MBH98 methodology to the construction of a Northern Hemisphere average temperature index for the 1400-1980 period, using corrected and updated source data. The major finding is that the values in the early 15th century exceed any values in the 20th century. The particular “hockey stick” shape derived in the MBH98 proxy construction – a temperature index that decreases slightly between the early 15th century and early 20th century and then increases dramatically up to 1980 — is primarily an artefact of poor data handling, obsolete data and incorrect calculation of principal components.

Here’s the second E&E paper: The M&M Critique of the MBH98 Northern Hemisphere Climate Index: Update and Implications.

Here’s the abstract:

ABSTRACT

The differences between the results of McIntyre and McKitrick [2003] and Mann et al. [1998] can be reconciled by only two series: the Gaspé cedar ring width series and the first principal component (PC1) from the North American tree ring network. We show that in each case MBH98 methodology differed from what was stated in print and the differences resulted in lower early 15th century index values.

In the case of the North American PC1, MBH98 modified the PC algorithm so that the calculation was no longer centered, but claimed that the calculation was “conventional”. The modification caused the PC1 to be dominated by a subset of bristlecone pine ring width series which are widely doubted to be reliable temperature proxies. In the case of the Gaspé cedars, MBH98 did not use archived data, but made an extrapolation, unique within the corpus of over 350 series, and misrepresented the start date of the series. The recent Corrigendum by Mann et al. denied that these differences between the stated methods and actual methods have any effect, a claim we show is false. We also refute the various arguments by Mann et al. purporting to salvage their reconstruction, including their claims of robustness and statistical skill. Finally, we comment on several policy issues arising from this controversy: the lack of consistent requirements for disclosure of data and methods in paleoclimate journals, and the need to recognize the limitations of journal peer review as a quality control standard when scientific studies are used for public policy.

Them’s fightin words.

Here’s the Corrigendum:

I’ll post some commentary from other observers later.

So the outstanding issues, according to M&M, include the correct data used for MBH98/99 — what is the correct data? Here is their list of questions:

QUESTIONS FOR PROFESSORS MANN, BRADLEY AND HUGHES THAT ARISE FROM THIS ANALYSIS.

These questions summarize the results of our audit of the data set. Answers to these questions are required to settle the contradiction between the original and corrected results.

1. Does the database contain truncations of series 10, 11 and 100 (and of the version of series 65 used by MBH98)?

2. Are the 1980 values of series #73 through #80 identical to 7 decimal places? Similarly for the 1980 values of series #81-83? And for the 1980 values of series #84 and #90-92? What is the reason for this?

3. Where are the calculations of principal components for series in the range #73-92 that would show that these have been collated into the correct year? Do you have any working papers that show these, and if so, would you make them FTP or otherwise publicly available?

4. Do the following series contain “fills”: #3, #6, #45, #46, #50-#52, #54-#56, #58, #93-#99?

5. How did you deal with missing closing data in the following series: #11, #102, #103, #104, #106 and #112?

6. What is the source for your data for series #37 (precipitation in grid-box 42.5N, 72.5W)? Did you use the data from Jones-Bradley Paris, France, and if so, in which series? More generally, please provide identifications of the exact Jones-Bradley locations for each of the series #21-42. Where are the original source data?

7. Did you use summer (JJA) data for series #10 and #11 rather than annual data? If so, why?

8. Does your dataset contain obsolete data for the following series: #1, #2, #3, #6, #7, #8, #9, #21, #23, #27, #28, #30, #35, #37, #43, #51, #52, #54, #55, #56, #58, #65, #105 and #112?

9. Do you use the following listed proxies: fran003, ital015, ital015x, spai026 and spai047? If so, where?

10. Did you commence your calculation of principal components after the period in which all dataset members were available for the following series: #69-71, #91-92, #93-95, #96-99?

11. What is the basis for inclusion of some tree ring sites within a region in regional principal component calculations and others as individual dataset components?

12. Did you commence your calculation of principal components before the period in which all dataset members were available for the following series: #72-80, #84-90? If so, please describe your methodology for carrying out these calculations in the presence of missing data and your justification for doing so.

13. What is the explained variance under your principal component calculation for the period of availability of all members of your selected dataset? Would you please make your working papers that show this FTP or otherwise publicly available?

I encourage readers who are in the know to comment or provide links to analysis of these questions. I’ll try to post responses from MBH when I have some time.

From my limited read of the literature surrounding all this, it seems to me that the HS controversy is limited to a few main issues:

1. PC Analysis — short-centered vs. conventional PC analysis (see the code sketch after this list)

2. Data used — which data set was used, and how were missing data filled in.

3. Use of certain proxies, such as BCPs and others.
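Because issue 1 is easy to state but hard to picture, here is a minimal sketch contrasting conventional, full-period centering with the “short-centered” variant at issue, in which the mean (and, in MBH98, the standard deviation as well) is taken over a late calibration window only. Applied to persistent, trendless red noise, the short-centered PC1 tends to single out series that happen to drift during the calibration window; the data and the index below are invented for illustration, not the MBH98 network.

```python
# Conventional vs. short-centered PCA on trendless AR(1) pseudo-proxies.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_series, calib = 581, 70, 79       # e.g. 1400-1980, 1902-1980 window

# Red-noise pseudo-proxies with no common climate signal.
noise = rng.standard_normal((n_years, n_series))
proxies = np.empty_like(noise)
proxies[0] = noise[0]
for t in range(1, n_years):
    proxies[t] = 0.9 * proxies[t - 1] + noise[t]

def pc1(X, rows):
    """First principal component of X after centering on the given rows only."""
    Xc = X - X[rows].mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

pc1_conv = pc1(proxies, slice(None))                    # full-period mean
pc1_short = pc1(proxies, slice(n_years - calib, None))  # calibration-only mean

# Crude "hockey stick index": how far the calibration-window mean of PC1 sits
# from the rest of the record, in standard deviations (sign is arbitrary).
for name, pc in [("conventional", pc1_conv), ("short-centered", pc1_short)]:
    hsi = abs(pc[-calib:].mean() - pc[:-calib].mean()) / pc.std()
    print(f"{name:>15}: HSI = {hsi:.2f}")
```

On a typical run the short-centered index comes out noticeably larger, which is the substance of the M&M05 red-noise argument; whether it matters for the final reconstruction is exactly what the commenters below dispute.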

23 Responses to “McI – Mann Face Off: MBH98/99”

  1. Rattus Norvegicus, March 7, 2010 at 10:40 pm

    Yep. You’ve pretty much got it right.

    As far as the issues go:

    1) This has been shown not to matter. Short-centered, conventional or no, the PC analysis method does not matter. In fact, by 2003 Mann was already investigating the properties of other methods.

    2) I believe that Mann corrected this in his corrigendum issued in 2004. For infilling, Mann seems to have used bog-simple methods — one of McI’s pet peeves is the infilling of three (!) years of data at the beginning of the Gaspe series. My understanding is that Mann just used the values from the earliest year of the series to infill the missing data. Other gaps were filled with similarly naive methods.

    3) Hughes is an expert in BCPs. I would take his scientific opinion over that of McI any day. Subsequent work by Salzer et al. (PNAS, 2009) has upheld the validity of the BCP cores.

    Zorita’s concerns about low frequency variability have held water and have been confirmed by subsequent work, including work by Mann himself. Other work by Zorita and von Storch has not fared so well due to problems with the noise models they used and incorrect spin-up procedures on their model runs.

  2. Subsequent work by Salzer et al. (PNAS, 2009) has upheld the validity of the BCP cores.

    Including so-called stripbark BCPs, as was mentioned on another thread (though Ron Cram insists it’s a “bad paper”, undoubtedly because McIntyre has declared it to be a “bad paper”).

  3. One of the criticisms McI raises with Salzer et al. is that the new result merely shifts the divergence from the 20th century to the 19th century, as McI showed in his blog post on the issue, which discusses the Salzer paper and references the Ababneh thesis.

    Here is McI’s original post on Ababneh.

    Here is her dissertation.

    From what I understand reading McI’s posts, Ababneh’s thesis updated Sheep Mtn and the results diverged significantly from the Graybill series used in MBH.

    He alleges that this has not been addressed in the literature, including Salzer, and that Ababneh has been put in ‘witness protection’ – IOW, that she has not been speaking about her findings because they refute the big boys.

    So while it is true that the strip bark and whole bark show similar trends in the new work, the new work does not address the problematic Ababneh findings and merely shifts the divergence from the 20th to the 19th century.

    Admittedly, I am not qualified to adjudicate this scientific debate, but I would appreciate it if others with more training (or confidence in their abilities) would weigh in on this.

  4. 1. PC Analysis — short-centered vs. conventional PC analysis

    Try Ian Jolliffe on PCA at Tamino’s:
    http://tamino.wordpress.com/2008/08/10/open-thread-5-2/#comment-21873

    McIntyre’s comment on this:

    Ian Jolliffe Comments at Tamino

    2. Data used — which data set was used, and how were missing data filled in.

    It took years for this information to come out, and it is difficult to provide a quick answer on how missing data was filled in. Which data was used was a valid question and even Mann/Rutherford provided incorrect data to McIntyre.

    3. Use of certain proxies, such as BCPs and others.

    The NAS panel recommended not to use BCPs. Big issue, much written about this one.

      • That was a great thread over at Tamino’s. As a non-scientist, my hope is to grasp the forestness of things since I am unqualified to grasp the treeness. 😉 There seems to be much debate over whether PCA should be used on non-stationary data. I assume from Jolliffe that tree rings and other proxy data are non-stationary. It appears that short-segment PCA (does TCO really get credit for that? 🙂 ) used by MBH is novel and perhaps questionable, although it appears that uncentered or non-centered PCA is a variant that can be useful in some specific cases. It appears that the terms uncentered, noncentered and decentered PCA get mixed up in the analyses over at Tamino, and at CA, adding to the general confusion and sense of he said/she said.

      One take home from that thread is this: one should assume that a person means what they say and if they make a mistake it is an honest one. In other words, give them the benefit of the doubt until proven otherwise. In a highly politicized debate like that over global warming, it is all too easy to assume malice on the part of the other guy when it may be just a simple mistake, a misunderstanding, sloppiness or an honest disagreement over interpretation.

      What is also clear to me since reading McI’s early posts is this: he entered the fray as a non-scientist, a non-climate scientist, a non-dendroclimatologist and applied his values and understanding of the way things should work from a mining explorations perspective, a business perspective, and an auditing perspective. In other words, he applied a whole different template to what he was examining than was developed within that discipline. It’s no wonder it didn’t live up to his expectations.

      It’s fair to ask if climate science lives up to its own set of expectations and values, but then, I suspect that as most science disciplines are made up of humans, rather than gods, there will be the occasional mistakes, personal scraps, and smallness that you find outside science — even in business! In my view, ‘skeptics’ aka contrarians and deniers are being disingenuous when they parrot McI’s claims about quality control and engineering-level analyses in terms of data and review as proof that climate science is not science. Science has its own set of processes and procedures that have worked very well for a couple of centuries — not that it’s perfect, but it has been wildly productive and we all have benefitted. I say, let’s do what we can to ensure scientists can live up to their own standards and realize their values rather than try to overhaul it.

      Second, I would add that the project is wrong-headed for ‘citizens’ — even talented ones — to think they can come in and audit science. Seriously, I reject this notion of citizen audit. It is hubristic in the extreme (how many times have I heard over at CA “I’m not a scientist nor do I have any skills in statistics but right on Steve!”) to think that all a person needs is high school (or sophomore-level) math and stats to understand and participate as an equal and “audit” climate science. I know this opinion will not go over well with many people, but even Jolliffe doesn’t know everything about PCA and he’s a recognized expert! He’s at least humble about it, whereas I see many on the contrarian side assuming that if you make an error in applying stats, you must be a charlatan.

      A little respect and modesty might be warranted. Yes, people should educate themselves and try to understand the science as best they can given their education and abilities, and to be open and willing to consider different perspectives. I encourage that. Science is the greatest thing humans have created, IMO. I think it is due our respect and we should recognize that to do science takes a lot of study and practice and research. People like to point to Einstein being a lowly patent clerk when he was writing his papers, but he had a freakin PhD in physics. It’s also interesting to note that Einstein was supposed to study electrical engineering but he rejected that as being too restrictive — he was far too creative to want to be an engineer. 😉 Imagine if that creativity was restricted – if he operated under the constraints imposed by engineering or business.

      /rant

      • I have to add that I just read the Monbiot article and have to agree with what he wrote — it expresses what I think about the climate wars pretty much to a tee. It’s very disheartening to see what is taking place in this age of anti-science. I don’t know what the answer is but the prospects don’t look good. That’s the reality of our brave new world of immediate communication, when everyone can be a pundit. It’s both liberating and democratizing, and it can also spread ignorance as much as knowledge.

  5. Don’t forget Dr Gerald North’s seminar on the NAS panel and the hockey stick, which I posted here:
    https://shewonk.wordpress.com/2010/02/24/enron-and-the-zombie-fungus/#comment-1332

  6. Concerning Ababneh, the two graphs McIntyre shows from Appendix III of her thesis are normalized to a 20 year moving average in order to remove low frequency variability (p. 118). To get an idea of the variability thus removed, look at Fig. 12(a,b,c,d) in Appendix I (pp. 66-68), of which she says, “A comparison of the entire box plots show that the past 30 years has a higher mean than any of the previous intervals”.

    McIntyre has confused the issue at every turn. He’s disingenuous at the very least. I believe it is reasonable to say he’s flat-out lying.

    What sets the Salzer, Hughes paper apart, besides the large sample size, is the transect altitude analysis, which lays to rest the strip bark vs. whole bark question as well as effectively dis-aggregating temperature and moisture differences.

    The BCP divergence is fundamentally important to M&M’s non-centered PCA argument in that it rests on Wegman’s conclusion that non-centering may be made invalid by inclusion of spuriously correlated data series, even though analysis done by Ammann and Wahl and others has shown by exclusion that none of the claims of spurious data are significant.

  7. Luminous Beauty:

    Concerning Ababneh, the two graphs McIntyre shows from Appendix III of her thesis are normalized to a 20 year moving average in order to remove low frequency variability

    Thank you for this. Shewonk, in case you’re unaware, the significance is that the RCS analysis used by Briffa and others on tree ring data is meant to *preserve* any low frequency signal that might exist, such as changes due to climate variability.

  8. There seems to be much debate over whether PCA should be used on non-stationary data. I assume from Jolliffe that tree rings and other proxy data are non-stationary.

    Well, I found this interesting.

    Go to the 2nd paragraph of the section titled “Derivation of Tree-Ring Indices”, which explains why it’s nonstationary and the work done to essentially transform the data into a stationary series.

    This was back in the 1960s, so the field has obviously been aware of the problem for a long time, and at least they think they know how to address it.

    It’s been awhile since I’ve looked at Briffa’s stuff but I know the basic problem is mentioned there, and I don’t see how you can build a meaningful long-term time series without addressing it somehow.

    I don’t know how the BCP proxies in MBH98 were dealt with prior to being analyzed along with other proxies using PCA.

    Note that the explanation of tree ring series being nonstationary doesn’t apply to at least some other proxies, so I don’t know if your “and other proxies” statement is true or not. I assume you have to investigate each proxy series individually.

    I’m by no means an expert, just a dude with a BS in math, so do your own reading and reach your own conclusions, please!

    It appears that the terms uncentered, noncentered and decentered PCA get mixed up in the analyses over at Tamino, and at CA, adding to the general confusion and sense of he said/she said.

    Not only there, but apparently in a variety of papers using PCA to analyze time series. Jolliffe said something like “if there’s a 3rd edition of my book, I’ll add stuff to clarify this …”

    It appears that short-segment PCA (does TCO really get credit for that? ) used by MBH is novel and perhaps questionable

    Questionable, not perhaps questionable. He no longer uses it. I don’t think “questionable” is really the issue, “material” is the issue. And it’s been shown to not be material, whatever the theoretical shortcomings of his technique.

    And Jolliffe also said this:

    Almost any decent statistical model-fitting will give the upward trend at the end of the series…

    In the midst of a paragraph where he’s explaining that the mainstream view of AGW is almost undoubtedly true regardless of the hockey stick.

    All this underscores to some extent McIntyre’s tendency to draw his sword, wield it mightily, and, after successfully scratching his opponent, declare that climate science is like the armless, legless Black Knight in “Monty Python and the Holy Grail”.

    The portion of your post dealing with citizen “auditing” of science, etc, is very good. It’s the kind of thing that could be usefully repackaged and published as an op-ed piece in the daily newspaper of your choice …

  9. …he entered the fray as a non-scientist, a non-climate scientist, a non-dendroclimatologist and applied his values and understanding of the way things should work from a mining explorations perspective, a business perspective, and an auditing perspective. In other words, he applied a whole different template to what he was examining…

    I don’t think that things such as this get said often enough, shewonk, but I also think that you’re being far too charitable. McI isn’t interested in understanding science from within. He has no regard for its self-correcting properties, none for its epistemological subtleties, and most of all, none for its autonomy. It tells us things that we don’t necessarily want to hear, especially if we’re Canadian semi-retired mining executives. It has to be suborned. That’s what he’s been doing – importing the norms of the business world into the sphere of scientific research – the assumptions of dishonesty, incompetence, ideologically-driven question-begging and gold-bricking attributed to anyone regarded as being in the opposing camp. The very name of his blog tells you what his agenda is – “Climate Audit” – if he was interested in the science, it would be something along the lines of “Climate Hypothesis Testing”. But I think that the name’s carefully chosen, and it’s meant to suggest that in fact scientists have had it all their own way for far too long, and now it’s time for serious, sceptical adults with none of these lefty tree-hugging tendencies to cast a cold critical eye over what they’ve been up to. It’s meant to suggest that science mustn’t be allowed to define itself. It must be subject to the sorts of norms we find in the legal and business worlds. And he’s done a fine job of promoting this, to date. Maybe I haven’t been paying attention, but I can’t recall McI being challenged very often on the grounds that he simply hasn’t paid his dues and isn’t working within the scientific paradigm. Let alone anyone asking him why, beyond all of the bucks at risk, he’s so keen on “auditing” rather than doing science himself.
    Preaching to the choir. OK, I’m all done now.

  10. Lars:

    Preaching to the choir. OK, I’m all done now.

    nice rant, though! 🙂

  11. Lars,

    I find when Steve starts quacking about ‘due diligence’ he reveals his desire to suborn science to business interests. Engineers can only do due diligence because research scientists have already done the heavy lifting of probing the uncertain edges of the unknown.

  12. Lars, I was being charitable. I agree with your comment. What hits you when you read McI’s blog is the utter contempt he seems to have for climate science and climate scientists, but perhaps all science since I don’t really see any evidence that it is any different than other sciences. It really is a wholesale war on science.

    McI claims that he is operating in good faith, but as you point out, even the title of his blog tells the tale about his whole orientation to science. It’s not pro-science, it’s anti. It starts from the position that climate science is fraudulent and his project is to uncover evidence of that fraud, that’s why he refers to Bre-X and Enron. That’s his frame of reference.

    People say that if the science is sound it will withstand the audit, but they are politically naive — or disingenuous. Science is vulnerable — it takes place within a political and economic system which can affect how it is able or unable to do its work. Just look at the list Inhofe created — I don’t like to be an alarmist, but seriously. People should be wary when politicians start bringing scientists up in front of congressional hearings and senators start calling for arresting scientists for doing their jobs. Contrarians and denialists are criminalizing science — they are turning normal scientific disagreements over methods and data and process into crimes, smearing the scientists and, through talk radio like Rush Limbaugh and other far-right-wing media, inciting the public to threats against scientists.

    To understand this you don’t really need to understand science. It’s not really about the science – that’s just the cover story. To understand this you need to understand the politics and economics. It’s about people with a political perspective — libertarianism — and economic interests — in the fossil fuel industry — waging a war against science in order to win the policy war. It’s not about truth — it’s war and you know what the first casualty is in war…

  13. Shewonk, a much more lucid and measured expression of what I was trying to say, and you took it further too.

    But I still think that McIntyre should be challenged upon these grounds, frequently and aggressively. He gets away with the “citizen-scientist” schtick far too much. A lot of people don’t know any better, have no epistemological (gotta reclaim that word from the Randies) background, and take him at face value.

      • Lars, the accolades he receives in the press and the label of ‘citizen scientist’ really work to give him credibility in the minds of the public, especially those on the right. I was listening to Richard North’s talk on the hockey stick earlier this evening and even he gives McIntyre a lot of credit for seeing the problems in the statistics. Sure, granted, the short-centered PCA was a good pick-up, as was the Y2K error and a few other data issues, but I would argue that, when you add it all up, he has done more harm than good. For all the breaking of the MBH hockey stick that’s been accomplished, the science is still intact, global warming is still evidenced in the temperature records and in the physical evidence. Yet, what has happened as a result of the persistent FOIA efforts and hacking away at climate science is probably to delay even further any action on AGW. Being wrong about a statistical method in a paleoclimate paper is one thing; being wrong about the risk from global warming and contributing to policy inaction is a whole other matter.

      I know which mistake I’d rather make.

  14. Late and I must be off home, where there is no internet. But this was my point, shewonk – McI isn’t a scientist, he’s an epistemological wrecker and has to be exposed as such. Every time his name comes up. He’s an enemy of a very important part of the Enlightenment – that might sound a bit overblown (OK, a lot), but when you get right down to it, he’s trying to geld our only certain way of finding out how the Universe works. I think that he should have that hung around his neck every time he surfaces.
    And I’m not sure that he’s such a hot shot on the statistics either. Tamino at Open Mind has discussed some aspects of this. No time to get into this now, and I think that others have covered this elsewhere on this site more ably than I would be able to.

  15. Rattus Norvegicus, March 9, 2010 at 9:24 pm

    Lars:
    …but when you get right down to it, he’s trying to geld our only certain way of finding out how the Universe works. I think that he should have that hung around his neck every time he surfaces.

    Perhaps that should be his own balls.

    As Mann stated in one of the emails, “He almost had a point w/ the PCA centering, but as we all know, that doesn’t matter at all in the end.” The reason for this is that it made no difference whatsoever to the conclusions. Yet McIntyre blew this up into something that resulted in a complete takedown of the original paper, even though it had been backed up by 2005 by many other similar studies using differing (although not completely independent) data sets and different analysis methods.

    Steve has often been wrong, sometimes embarrassingly so — witness the recent Yamal kerfuffle — but he has found a couple of real errors. Not that these errors made any difference, but due to his prominence within the denialist community he was able to blow them way out of proportion to their importance.

  16. @Rattus Norvegicus:

    The problem with McIntyre may very well be that he has been elevated in the denialist community, and can no longer go back. He’s been convinced by his cheering crowd that he is right. It has happened before. I always take Peter Duesberg as a good example of someone who raised some valid points in the past, but never got beyond them. Now, Duesberg cannot ever accept HIV is the cause of AIDS, regardless of all the evidence, because his claim to fame *is* that he denies HIV is the cause of AIDS. He got so much attention as a denier that stopping the denial would leave him in a big black hole. I think the same is the case for Steve McIntyre. He can hardly ever admit he’s been really wrong, because he’s become the main focus of a personality cult. With thousands of followers…that’s a really deep hole if they disappear.

  17. For much of the story you need to go to sci.environment and look at the interminable “auditing the auditors” threads. It turns out that McI was sock-puppeting as Nigel Persaud (you could search on that). It took a few days to figure out what Rutherford had done: basically, a couple of the series were shifted a year or two in either direction, some were labeled wrong, etc.

    What Eli didn’t do is go back and check McI’s email to Mann against what he must have known from the sci.environment pages. Some people also checked the PCA analysis.

  18. Eli Rabett:
    It turns out that McI was sock-puppeting as Nigel Persaud (you could search on that).

    Here’s one thread. The poster to look for is ..g.. ..rsa..

    http://www.varioustopics.com/environment/660159-m-and-m-one-of-the-most-important-papers-published-in-recent-years.html

  19. Appreciate your blog post.
