
Submitted URL: http://climateaudit.org/
Effective URL: https://climateaudit.org/
Submission: On April 15 via manual from CA — Scanned from CA


CLIMATE AUDIT

by Steve McIntyre



D’ARRIGO ET AL 2006: NWNA ALASKA

Dec 14, 2023 – 5:04 PM

Today’s article is about one of the D’Arrigo et al 2006 datasets.

D’Arrigo et al 2006, then under submission, had been cited in drafts of the IPCC
Fourth Assessment Report. I had been accepted as an IPCC reviewer and, in that
capacity, I asked IPCC to make the data available to me or to ask the lead
author to make the data available. That prompted a vehement refusal, which I
documented in March 2007 (link). Readers unfamiliar with the severity of data
obfuscation by the climate science community should read that exchange. (Some
further light on the campaign emerged later in the Climategate emails.)

D’Arrigo et al 2006 calculated more than a dozen new regional chronologies, but
refused to archive or provide the digital chronologies until April 2012, more
than six years later (by which time the paleo field purported to have “moved
on”). At the same time, D’Arrigo et al provided somewhat sketchy information on
which sites had been used in the various reconstructions, but measurement data
for many of the sites remained unavailable, including (and especially) the
sites that had been sampled by D’Arrigo, Jacoby and their associates. Much of
this data was archived in April 2014, a few months before Jacoby’s death. But
even this archive was incomplete.

By then, D’Arrigo et al 2006 was well in the rear-view mirror of the paleo
community, and there has been little, if any, commentary on the relationship of
the long-delayed 2014 data archive to the 2006 article.

In several recent posts, I’ve discussed components of D’Arrigo’s Northwest
Alaska (NWNA) regional chronology, which, prior to 2012, had only been available
in the muddy form shown below.



The NWNA series runs from AD1297 to AD2000 and closes on a high note, as shown
more clearly in the top panel below, which re-plots the post-1800 period of the
NWNA chronology (RCS version; the STD version is very similar). Also shown in
this figure (bottom panel) is the post-1800 period of the chronology (ModNegExp)
for the Dalton Highway (ak104) site, the only component of the NWNA composite
with values in the 1992-2000 period (shown to the right of the red dashed line).



Look at the difference to the right of the dashed line at AD1990. In the
underlying Dalton Highway data, the series ends at almost exactly the long-term
average, whereas the same data, incorporated into D’Arrigo’s NWNA regional
composite, closes at record or near-record highs for the post-1800 period.

If the 1992-2000 Dalton Highway data doesn’t show record highs for the site
chronology, then it is implausible to claim that it shows record highs for the
regional chronology.  So what’s going on here?

My guess is that the regional chronology mixed sites with different average
ring widths and that their rudimentary statistical technique didn’t accommodate
those differences. If so, this would be the same sort of error that we saw
previously with Marcott et al 2013, in which there was a huge 20th century jump
without any increase in the component series (simply because a low-value series
ended earlier). Needless to say, these errors always go in a hockey stick
direction.
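The arithmetic of this kind of error is easy to illustrate. Below is a minimal Python sketch, using entirely synthetic data (not the actual NWNA or Marcott series), showing how a naive composite of sites with different mean ring widths produces a spurious closing jump when the low-mean series ends early:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1800, 2001)

# Two hypothetical trendless site series with different mean ring widths.
high_mean = 1.5 + 0.05 * rng.standard_normal(years.size)   # covers full 1800-2000
low_mean = 0.5 + 0.05 * rng.standard_normal(years.size)
low_mean[years > 1950] = np.nan                             # ends in 1950

# Naive composite: average whatever series have data in a given year.
composite = np.nanmean(np.vstack([high_mean, low_mean]), axis=0)

# Before 1950 the composite sits near 1.0; once the low-mean series drops
# out, it steps up to about 1.5 -- a jump created purely by the change in
# sample composition, not by any signal in either series.
jump = composite[years > 1950].mean() - composite[years <= 1950].mean()
```

Neither input series goes anywhere, yet the composite acquires a step of roughly 0.5 at exactly the year the short series drops out.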

 

By Stephen McIntyre | Posted in Uncategorized | Tagged D'Arrigo 2006, jacoby,
nwna | Comments (18)


SHEENJEK, ALASKA: A JACOBY-MBH SERIES

Dec 13, 2023 – 3:44 PM

MBH98 used three Jacoby tree ring chronologies from Alaska: Four Twelve (ak031)
– discussed here, Arrigetch (ak032) and Sheenjek (ak033). Sheenjek will be
discussed in this article.

In our compilation of MBH98 in 2003, we observed that the Sheenjek chronology
archived at NOAA Paleo was not the same as the “grey” version used in MBH98. 
 While we used the MBH98 version to benchmark our emulation of the MBH98
algorithm, we used the version archived at NOAA in our sensitivity analysis,
both in our 2003 article and in our early 2004 submission to Nature.  In his
reply to our submission, Mann vehemently protested that the “introduc[tion of]
an extended version of another Northern Treeline series not available prior to
AD 1500 at the time of MBH98” “introduce[d] problems into the important Northern
Treeline dataset used by MBH98”:

> Finally, MM04 introduce problems into the important Northern Treeline dataset
> used by MBH98. Aside from incorrectly substituting shorter versions of the
> “Kuujuag” and TTHH Northern Treeline series for those used by MBH98, and
> introducing an extended version of another Northern Treeline series not
> available prior to AD 1500 at the time of MBH98, they censored from the
> analysis the only Northern Treeline series in the MBH98 network available over
> the AD 1400-1500 interval, on the technicality that it begins only in AD 1404
> (MBH98 accommodated this detail by setting the values for AD 1400-1404 equal)

The other “Northern Treeline series” referred to here was the Sheenjek
chronology ak033.crn. I checked Mann’s assertion that the data was “not
available prior to AD1500 at the time of MBH98”; it was contradicted by NOAA,
who confirmed that the chronology that we had used had been available since the
early 1990s.

In the figure below, I’ve compared three Sheenjek chronology versions:

 * the MBH98 version, covering 1580-1979 (plus 1980 infill);
 * the ModNegExp chronology (dplR) calculated from the measurement data
   (ak033.rwl), which, in this case, has been available since the 1990s,
   covering 1296-1979;
 * the chronology archived at NOAA (ak033.crn), also covering the period
   1296-1979.
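For readers unfamiliar with ModNegExp standardization: the idea is to fit a (modified) negative exponential growth curve to each core’s ring widths, divide the measurements by the fitted curve to get dimensionless indices, and average the indices across cores year by year to form the site chronology. A rough Python emulation on synthetic cores (a sketch only; the real dplR routine has additional safeguards, such as falling back to a straight line when the exponential fit is unsuitable):

```python
import numpy as np

def fit_neg_exp(widths):
    """Least-squares fit of a*exp(-b*t) + k, scanning b and solving the
    linear subproblem for (a, k) -- a crude stand-in for dplR's fit."""
    t = np.arange(widths.size, dtype=float)
    best = None
    for b in np.linspace(1e-4, 0.2, 200):
        X = np.column_stack([np.exp(-b * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, widths, rcond=None)
        sse = ((X @ coef - widths) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, X @ coef)
    return best[1]

def detrend_core(widths):
    """Ring-width indices: measurements divided by the fitted growth curve."""
    return widths / fit_neg_exp(widths)

# Synthetic cores: juvenile growth decay plus noise, no climate signal.
rng = np.random.default_rng(1)
t = np.arange(300, dtype=float)
cores = [1.2 * np.exp(-0.02 * t) + 0.5 + 0.05 * rng.standard_normal(t.size)
         for _ in range(8)]

# Site chronology: mean index across cores, year by year.
chronology = np.mean([detrend_core(c) for c in cores], axis=0)
# With the growth trend removed, the indices scatter around 1.0.
```

Because the fitted curve is “stiff” (only three parameters), slow multi-century variation in the data survives into the indices rather than being absorbed by the growth curve.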



The issues relating to Sheenjek are different from those observed at Four
Twelve.

 * The MBH98 version and the chronology freshly calculated from the
   measurement data (rwl) using the ModNegExp option (emulating the
   contemporary Jacoby technique) are very similar for their period of
   overlap (1580-1979). Neither shows elevated 20th century values or a
   closing uptick; if anything, there is a modest decline in the late 20th
   century.
 * However, the MBH98 version excludes all values prior to AD1580. There is
   no good reason for this exclusion: there are 28 cores in ak033.rwl in
   1579, far above usual minimums. In the 15th century, there are more cores
   for Sheenjek than for the Gaspe series, which MBH98 used in its AD1400
   network even when it had only one core (and even no cores for the first
   five years).
 * The Sheenjek chronology archived at NOAA (ak033.crn) was clearly derived
   from the ak033.rwl dataset, as the series in the middle and bottom panels
   are highly correlated. However, from its appearance, it looks like the
   archived chronology was calculated with very flexible splines (rather
   than “stiff” ModNegExp), and that this has attenuated the “low frequency”
   variability observed in the middle panel using the ModNegExp option.
 * We used the ak033.crn version in our sensitivity study. If the same
   exercise were repeated using the middle-panel version, it would yield
   relatively high early 15th century results.
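The attenuation effect of flexible standardization is easy to demonstrate: dividing by a curve that bends with the data removes century-scale variability that a stiff curve preserves. A Python sketch with synthetic data, in which a simple moving average stands in for the flexible spline (illustrative assumptions only; the 400-year swing, noise level and window lengths are invented):

```python
import numpy as np

def smooth(x, window):
    """Centered moving average; stands in for a flexible spline curve."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(window) / window, mode="same")[pad:pad + x.size]

rng = np.random.default_rng(2)
n = 684  # same length as the 1296-1979 Sheenjek chronology
t = np.arange(n)

# Synthetic input: a slow multi-century swing plus year-to-year noise.
series = 1.0 + 0.3 * np.sin(2 * np.pi * t / 400) + 0.1 * rng.standard_normal(n)

# "Stiff" standardization (divide by the overall mean) keeps the swing;
# "flexible" standardization (divide by a 50-year smoother) removes it,
# because the smoother itself tracks the swing.
stiff_index = series / series.mean()
flexible_index = series / smooth(series, 50)

def low_freq_variance(x):
    """Variance of the centennial-scale component (100-year smooth)."""
    return np.var(smooth(x, 100))

# The flexible version retains far less low-frequency variability.
ratio = low_freq_variance(stiff_index) / low_freq_variance(flexible_index)
```

The multi-century swing survives the stiff treatment almost intact but is nearly erased by the flexible one, which is exactly the difference visible between a ModNegExp chronology and a flexible-spline chronology computed from the same measurements.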

It is not presently known who chopped off Sheenjek values prior to AD1580 in the
MBH98 version. Or why.

All cores in the Sheenjek dataset were included in the D’Arrigo et al 2006 NWNA
Composite.

 

By Stephen McIntyre | Posted in Uncategorized | Tagged jacoby, MBH98, sheenjek |
Comments (6)


FOUR TWELVE, ALASKA: A JACOBY SERIES

Dec 13, 2023 – 1:25 PM

Four Twelve (Alaska) was one of the 11 Jacoby and D’Arrigo series used in MBH98.
In our original 2003 article, we observed that the MBH98 version of this
chronology differed substantially from the chronology officially archived at
NOAA, and, in our sensitivity study, used the archived version (after using the
MBH version for benchmarking.)  Among other things, Mann objected vehemently to
the very idea of the sensitivity analysis that we had carried out:

> An audit involves a careful examination, using the same data and following the
> exact procedures used in the report or study being audited.  McIntyre and
> McKitrick (“MM03”) have done no such thing, having used neither the data nor
> the procedures of MBH98. Their effort has no bearing on the validity of the
> conclusions reported in MBH98, and is no way a “correction” of that study as
> they claim. On the contrary, their analysis seriously misrepresents MBH98. 





However, the different Jacoby versions were a secondary issue in the
contemporary debate and the inconsistency between MBH98 and Jacoby versions
wasn’t pursued further at the time.  

Analysis was further frustrated by peculiar inconsistencies in the Jacoby
archive itself.  For Four Twelve (and several other sites), the archived
chronology ended in 1990, whereas archived measurement data ended in 1977.  The
period of the archived measurement data was consistent with the period of the
MBH98 version of Four Twelve (treeline1.dat), but there was no measurement
archive corresponding to the archived chronology.  It was the sort of dog’s
breakfast that was all too typical.  Jacoby’s death-bed archive once again
provides an answer (as discussed below).

First, here is a comparison of the MBH98 chronology (treeline1.dat) versus the
chronology calculated from the ak031.rwl measurement data (covering exactly the
same period) using Bunn’s ModNegExp option (which corresponds to contemporary
Jacoby methodology).  The two chronologies are highly correlated and cover the
same period, but the elevated mid-20th century values of the MBH98 version were
not replicated.   I presume that the MBH98 version came from Jacoby and/or
D’Arrigo and that this version was used in Jacoby and D’Arrigo 1989 as well. 
Mann’s composite of Jacoby and D’Arrigo treeline series was also used for the
MBH99 bodge of the North American PC1 (to shave down the blade to “get” a
passing RE – as Jean S showed long ago).



One of the “new” measurement datasets in Jacoby’s death-bed 2014 archive was
ak109.rwl, described as a Four Twelve Update. It covered exactly the same period
as the ancient ak031.crn chronology (1524-1990). Application of the ModNegExp
chronology algorithm yielded an almost exact replication of the archived
chronology, as shown below. This confirms (1) that the ak031.crn chronology was
derived from the ak109.rwl measurement data (an inconsistency unique to the
Jacoby treeline data); and (2) that the ModNegExp algorithm is a reliable
equivalent to the methodology used by Jacoby for the chronologies archived in
the early 1990s.



Inconsistent Information on Updates

In the early 1990s, Jacoby updated multiple sites in the northern treeline
network published in 1989. In this article, I’ve commented on the Four Twelve
Update of 1990, for which the chronology was archived in the early 1990s, but
the measurement data not until 2014, more than 20 years later and more than 30
years after the original collection.

A much more troubling example (cited in early Climate Audit articles) was the
corresponding update for Gaspe, also carried out in the early 1990s, where the
measurement data yielded a totally different result from the big-bladed hockey
stick used in MBH98, but was withheld by Jacoby et al for a further 20+ years
until a few months before Jacoby’s death.

D’Arrigo et al 2006 NWNA Composite

Four Twelve (Alaska) is one of four sites that contribute to the D’Arrigo et al
2006 Northwest Alaska (NWNA) Composite, illustrated below. However, the NWNA
Composite goes up in its closing period, as opposed to the closing decline of
both Four Twelve versions.  Curiously, the NWNA Composite only uses the second
(1990) tranche of Four Twelve measurement data, excluding the original (1970s)
tranche, whereas for nearby Arrigetch, it incorporated both tranches. 

An odd inconsistency. I’ll look at the D’Arrigo NWNA Composite in due course. 

 

 

 



By Stephen McIntyre | Posted in Uncategorized | Comments (1)


DISCOVERY OF DATA FOR ONE OF THE “OTHER 26” JACOBY SERIES

Dec 12, 2023 – 1:32 PM

We’ve long discussed the bias imparted by ex post selection of data depending on
whether it went up in the 20th century. We have likened such after-the-fact
selection to a drug study carried out only on survivors.

The Jacoby and d’Arrigo 1989 network was a classic example: the original article
reported that they had sampled 36 northern treeline sites, from which they
selected 10 with the “best record…of temperature-influenced tree growth”, to
which they added a chronology of Gaspe cedars located far south of the northern
treeline at low altitudes.
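The survivorship analogy can be made concrete: screen pure-noise series for a 20th century rise, average only the survivors, and an uptick appears out of nothing. A Python sketch with synthetic random walks (the counts 10 and 36 come from the 1989 paper; everything else here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n_sites, n_years = 36, 200

# Pure-noise "site chronologies": random walks with no climate signal at all.
sites = np.cumsum(0.1 * rng.standard_normal((n_sites, n_years)), axis=1)

# Ex post screening: keep the 10 sites with the biggest closing-period rise
# (last 50 years vs the rest); set the other 26 aside, unarchived.
rise = sites[:, -50:].mean(axis=1) - sites[:, :-50].mean(axis=1)
survivors = sites[np.argsort(rise)[-10:]]

# The mean of the survivors trends up at the end even though no single
# series contains any signal: the drug study run only on survivors.
composite = survivors.mean(axis=0)
end_rise = composite[-50:].mean() - composite[:-50].mean()
```

By construction the survivor composite’s closing rise equals the average rise of the top 10 series, so it is positive even though the expected rise of any individual series is zero.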



In 2004 and 2005, I made a determined effort (link) to obtain the measurement
data for the 26 sites that weren’t included in the final calculation.  Jacoby
refused. I tried over and over to get this data, but was never successful.

Gordon Jacoby died in October 2014.  In June 2014, a few months prior to his
death, the Lamont Doherty Earth Observatory unit of Columbia University
(Jacoby’s employer) archived a large collection of tree ring data collected by
Jacoby and associates (link).  By then, it was 25 years since publication of
Jacoby and D’Arrigo 1989 and 8 years since publication of D’Arrigo et al 2006.

By then, the paleoclimate community had “moved on” to the seeming novelties of
PAGES2K. A few Jacoby and d’Arrigo series re-appeared in PAGES2K. I wrote a
couple of articles on these new Jacoby and d’Arrigo avatars: on their Central
Northwest Territories (Canada) series in January 2016 here; and on their Gulf of
Alaska series in February 2016 here and here. But the articles attracted little
interest. Jacoby and D’Arrigo had successfully stonewalled availability of data
until no one was interested any more.  Not even me.

However, while recently refreshing myself on ancient MBH98 issues, I discovered
something interesting: buried in the dozens of measurement datasets in the
belated 2014 archive was one of the datasets that Jacoby had withheld back in
2004. (Thus far, I’ve only found one, but there may be others.) It was a
northwest Alaska dataset collected in 1979. What did the withheld data show?
Despite the passage of time, I was interested.

Long-time readers will undoubtedly recall Jacoby’s classic data refusal:

> We strive to develop and use the best data possible. The criteria are good
> common low and high-frequency variation, absence of evidence of disturbance
> (either observed at the site or in the data), and correspondence
> or correlation with local or regional temperature. If a chronology does not
> satisfy these criteria, we do not use it. The quality can be evaluated at
> various steps in the development process. As we are mission oriented, we do
> not waste time on further analyses if it is apparent that the resulting
> chronology would be of inferior quality.
> 
> If we get a good climatic story from a chronology, we write a paper using it.
> That is our funded mission. It does not make sense to expend efforts on
> marginal or poor data and it is a waste of funding agency and taxpayer
> dollars. The rejected data are set aside and not archived.
> 
> As we progress through the years from one computer medium to another, the
> unused data may be neglected. Some [researchers] feel that if you gather
> enough data and n approaches infinity, all noise will cancel out and a true
> signal will come through. That is not true. I maintain that one should not add
> data without signal. It only increases error bars and obscures signal.
> 
> As an ex- marine I refer to the concept of a few good men.
> 
> A lesser amount of good data is better without a copious amount of poor data
> stirred in. Those who feel that somewhere we have the dead sea scrolls or an
> apocrypha of good dendroclimatic data that they can discover are doomed to
> disappointment. There is none. Fifteen years is not a delay. It is a time for
> poorer quality data to be neglected and not archived. Fortunately our improved
> skills and experience have brought us to a better recent record than the 10
> out of 36. I firmly believe we serve funding agencies and taxpayers better by
> concentrating on analyses and archiving of good data rather than preservation
> of poor data.

They may also recall Rosanne D’Arrigo’s remarkable 2006 presentation to a
dumbfounded NAS Panel, to whom she explained that you had to pick cherries if
you want to make cherry pie, as I reported at the time (link):

> D’Arrigo put up a slide about “cherry picking” and then she explained to the
> panel that that’s what you have to do if you want to make cherry pie. The
> panel may have been already reeling from the back-pedalling by Alley and
> Schrag, but I suspect that their jaws had to be re-lifted after this. Hey,
> it’s old news at climateaudit, but the panel is not so wise in the ways of the
> Hockey Team. D’Arrigo did not mention to the panel that she, like Mann, was
> not a statistician, but I think that they already guessed.

D’Arrigo et al (2006) was relied upon by both NAS Panel and IPCC AR4, but, once
again, D’Arrigo refused to provide measurement data – even when politely asked
by Gerry North, chair of the NAS Panel.

SUKAK PEAK (AK106)

The measurement data for ak106.rwl (link), Sukak Peak, Alaska, showed that it
had been sampled in 1979. It was at the same latitude (67-68N) in NW Alaska as
the three Alaska sites used in Jacoby and D’Arrigo 1989 (Four Twelve, Arrigetch,
Sheenjek) and was located about halfway between Arrigetch (151W) and Sheenjek
(144W).

It seems virtually certain that this was one of the “other 26” sites that Jacoby
had sampled prior to Jacoby and D’Arrigo 1989, but had excluded from the study
and then vehemently refused to provide.

Here is a chronology for Sukak Peak (ak106), calculated with Andy Bunn’s dplR
package (ModNegExp option to emulate Jacoby methodology) and plotted with its
chronology plot function: chronology in a solid line (left axis scale), core
counts in light grey (right axis scale):



First, the chronology (dark line) had elevated values in the AD1100s; its 20th
century values were unexceptional and declined through the 20th century, with
closing values indistinguishable from the long-term average. It definitely
doesn’t tell the “climatic story” that Jacoby was trying to tell.

Second, and this is a surprise (or maybe not), the core counts (shown in solid
light grey in the above graphic) show that Sukak Peak had 10 cores by AD1311
and already 5 cores by AD1104. In contrast, the entire Jacoby network
incorporated into MBH98 had only one core (from Gaspe) prior to AD1428 and none
prior to AD1404. In other words, although this site had been withheld by
Jacoby, its replication was better than at any other Jacoby and D’Arrigo site
used in MBH98. It was not “lower quality” in any objective sense.

Although the Sukak Peak data was still unarchived and unpublished in 2006, it
was used in the D’Arrigo et al 2006 NW Alaska Composite dataset, the chronology
of which reported high late 20th century values, the opposite of what is
displayed in this component. The NWNA Composite also included subsets of Four
Twelve, Arrigetch and Sheenjek (none of which show high late 20th century
values) and a later dataset from Dalton Highway, with which I’m presently
unfamiliar. I will take a look at this dataset in a follow-up post.

In closing, I had long presumed that data for the “other 26” Jacoby and D’Arrigo
northern treeline sites had disappeared forever. But it turns out that data for
one of the sites was archived in 2014: 35 years after collection in 1979, 25
years after publication of Jacoby and D’Arrigo 1989, and a mere 16 years after
publication of MBH98.

It then took another 9 years before anyone noticed that Jacoby’s death-bed
archive contained one of the long-withheld “other 26” sites. A pleasant
surprise nonetheless. But it was definitely not a surprise to discover that the
withheld data did not have a hockey stick shape.

 

By Stephen McIntyre | Posted in Uncategorized | Tagged alaska, d'arrigo, jacoby,
MBH98, treeline | Comments (39)


MBH98 WEIGHTS – AN UPDATE

Nov 26, 2023 – 2:10 PM

In numerous ancient Climate Audit posts, I observed that all MBH98 operations
were linear and that the step reconstructions were therefore linear combinations
of proxies, the coefficients of which could be calculated directly from the
matrix algebra (described in a series of articles). Soderqvist’s identification
of the actual proxies enables calculation of the AD1400 weights by regression of
the two “glimpses” of the AD1400 step (1400-1449 in the spliced reconstruction
and 1902-1980 in the Dirty Laundry data) against the proxy network. The
regression information is shown in an Appendix at the end of this post.
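The principle behind the weight recovery is simply that an exact linear combination can be recovered by ordinary least squares from any full-rank “glimpse” of it. A Python sketch with a synthetic proxy matrix (dimensions loosely modelled on the AD1400 step’s 22-series network and 129 glimpse years; this is not the actual MBH98 data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_proxies = 129, 22   # two "glimpses" (50 + 79 years), AD1400-size network

proxies = rng.standard_normal((n_years, n_proxies))
true_weights = rng.standard_normal(n_proxies)

# By construction the reconstruction is an exact linear combination of
# the proxies, mirroring the observation that all MBH98 steps are linear.
recon = proxies @ true_weights

# Ordinary least squares on the glimpse recovers the weights exactly
# (R^2 = 1 up to rounding), which is why a near-perfect regression fit
# identifies the true network so decisively.
recovered, *_ = np.linalg.lstsq(proxies, recon, rcond=None)
```

With 129 observations and 22 unknowns, the system is heavily overdetermined, so a wrong candidate network cannot produce a near-perfect fit by accident.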

The figure below shows the weights for (scaled) proxies as follows: left,
weights from my previous (ancient) calculations from “first principles”; right,
weights from regression of the reconstruction “glimpses” against the Soderqvist
identification of the network.

I haven’t yet tried to retrace my linear algebra using the new identification.
The linear algebra used in the diagram at left reconciles to five nines with the
Wahl-Ammann calculation, so it can safely be construed as giving the weights for
the AD1400 network as listed in the Nature SI, but not for the actual MBH98
network, whose weights are shown on the right.



Within the overall similarity, there are some interesting differences in
weights, arising from the use of four lower-order NOAMER (pseudo-) PCs rather
than four tree ring series from Morocco and France. The problematic Gaspe series
(what Mark Steyn referred to in his deposition as the “lone pine”) receives
nearly double the weighting in the MBH98 data as actually used, as opposed to
the incorrect listing at Nature. Also, the NOAMER PC6 is almost as heavily
weighted as the notorious Mannian PC1. It will be interesting to see how heavily
the Graybill stripbark bristlecones and other data that Mann had analysed in his
CENSORED directory feature in this other heavily weighted PC. My guess is that
the combination of principal components and inverse regression will show the
heavy weighting of stripbark bristlecones and the downweighting of other data
that we pointed out almost 20 years ago.

The contribution of individual North American species to the MBH AD1400
reconstruction can be calculated from the eigenvectors. In the Mannian PC1,
nearly all of the sites (and thus species) have positive coefficients, though,
as discussed many years ago, the stripbark species (bristlecones PILO and PIAR;
foxtail PIBS) are the most heavily weighted. When six PCs are used in the MBH98
algorithm, Engelmann spruce (PSME) are flipped to a negative orientation.



Appendix

Below is the output from a simple regression of MBH98 AD1400 “glimpses”
(AD1400-1449 from the splice and AD1902-1980 from the Dirty Laundry data)
against Soderqvist’s identification of the actual network. The R^2 of the
reverse engineering is 0.9999 with significance less than 2e-16 for all but one
proxy (seprecip-nc).  A small bit of untidiness with seprecip-nc, but de
minimis.



Also, for reference, here is a diagram of AD1400 proxy weights from late 2007
(link).  I discussed these weights in a couple of subsequent presentations. 



 

 

By Stephen McIntyre | Posted in Uncategorized | Tagged MBH98, soderqvist,
weights | Comments (58)


MANN’S OTHER NATURE TRICK

Nov 24, 2023 – 2:51 PM

In today’s post, I will report on some excellent work on MBH98 by Hampus
Soderqvist, who discovered an important but previously unknown Nature trick of
Mike’s: Mann’s list of proxies for AD1400 and other early steps was partly
incorrect (Nature link now dead – but see NOAA or here). Mann’s AD1400 list
included four series that were not actually used (two French tree ring series
and two Moroccan tree ring series), while it omitted four series that were
actually used. This also applied to his AD1450 and AD1500 steps. Mann also
used an AD1650 step that was not reported.

Soderqvist’s discovery has an important application.

The famous MBH98 reconstruction was a splice of 11 different stepwise
reconstructions with steps ranging from AD1400 to AD1820. The proxy network in
the AD1400 step (after principal components) consisted of 22 series, increasing
to 112 series (after principal components) in the AD1820 step. Mann reported
several statistics for the individual steps, but, as discussed over and over,
withheld the important verification r2 statistic.  By withholding the results of
the individual steps, Mann made it impossible for anyone to carry out routine
statistical tests on his famous reconstruction.

However, by reverse engineering the actual content of each network, Soderqvist
was also able to calculate each step of the reconstruction – exactly matching
each subset in the spliced reconstruction. Soderqvist placed his
results online at his github site a couple of days ago and I’ve collated the
results and placed them online here as well.  Thus, after almost 25 years, the
results of the individual MBH98 steps are finally available.

Remarkably, Soderqvist’s discovery of the actual composition of the AD1400 (and
other early networks) sheds new light on the controversy about principal
components that animated Mann’s earliest realclimate articles – on December 4,
2004 as realclimate was unveiled. Both articles were attacks on us (McIntyre and
McKitrick) while our GRL submission was under review and while Mann was seeking
to block publication. Soderqvist’s work shows that some of Mann’s most vehement
claims were untrue, but, oddly, untrue in a way that was arguably unhelpful to
the argument that he was trying to make. It’s quite weird.

Soderqvist is a Swedish engineer, who, as @detgodehab, discovered a remarkable
and fatal flaw in the “signal-free” tree ring methodology used in PAGES2K (see X
here).  Soderqvist had figured this out a couple of years ago. But I was unaware
of this until a few days ago when Soderqvist mentioned it in comments on a
recent blog article on MBH98 residuals.

THE STEPWISE RECONSTRUCTIONS

Mann et al (1998) reported that the reconstruction consisted of 11 steps and, in
the original SI (current link), reported the number of proxies (some of which
were principal component series) for each step – 112 in the AD1820 network and
22 in the AD1400 network.  As we later observed, the table of verification
statistics did not include Mann’s verification r2 results. Verification r2 is
one of the most commonly used statistics and is particularly valuable as a check
against overfitting in the calibration period.
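The role of verification r2 as a check against overfitting is easy to illustrate with a stylized example (the sizes loosely echo the MBH98 AD1400 network but the data are pure synthetic noise, so any apparent calibration fit is spurious):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylized setup (assumed for illustration): 22 "proxies", a 79-year
# calibration period, a 48-year verification period, and proxies that are
# pure noise, unrelated to the target.
n_cal, n_ver, n_proxies = 79, 48, 22
temp = rng.normal(size=n_cal + n_ver)                   # "target" temperature
proxies = rng.normal(size=(n_cal + n_ver, n_proxies))   # noise "proxies"

# Fit weights on the calibration period only
coef, *_ = np.linalg.lstsq(proxies[:n_cal], temp[:n_cal], rcond=None)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

cal_r2 = r2(temp[:n_cal], proxies[:n_cal] @ coef)   # flattered by overfitting
ver_r2 = r2(temp[n_cal:], proxies[n_cal:] @ coef)   # collapses out of sample
```

With 22 free coefficients fitted to 79 noise observations, the calibration r2 looks respectable while the verification r2 collapses – which is exactly why withholding the verification r2 matters.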



Although Mann claimed statistical “skill” for each of the eleven steps, he did
not archive results of the 11 individual step reconstructions. In 2003, we
sought these results, ultimately filing a formal complaint with Nature. But, to
its continuing discredit, Nature supported Mann’s withholding of these results. 
Despite multiple investigations and litigations, Mann has managed to withhold
these results for over 25 years.

Nor did Mann’s original SI list the proxies used in each step.  In April 2003, I
asked Mann for the location of the FTP site containing the data used in MBH98. 
Mann replied that he had forgotten the location but that his associate Scott
Rutherford would respond. Subsequently, Rutherford directed me to a location
on Mann’s FTP site which contained a collation of 112 proxies (datestamped July
2002), of which many were principal component series of various tree ring
networks. It’s a long story that I’ve told many times. In the 1400-1449 period
of Rutherford’s collation, there were 22 “proxies” including two North American
PCs.

In October 2003 (after asking Mann to confirm that the data provided by
Rutherford was the data actually used in MBH98), we published our first
criticism of MBH98. Mann said that we had used the “wrong” data and should have
asked for the right data. Mann also provided David Appell with a link to a
previously unreported directory at Mann’s FTP site, most of which was identical
to the directories in the Climategate zipfile that Soderqvist subsequently used.
This FTP location was dead from at least 2005 on and there is no record of it in
the Wayback Machine. (Its robots.txt file appears to have prevented indexing.) 
At the time, Mann also said that MBH98 had used 159 series, not 112 series.  We
asked Mann to identify the 159 series. Mann refused. (There was much other
controversy).

Ultimately, we filed a Materials Complaint with Nature asking them, inter alia,
to (1) require Mann to identify the 159 series actually used in MBH98  and (2)
provide the results of the individual steps (described as “experiments” in the
SI).  Nature, to its shame, refused to require Mann to provide the results of
the individual steps (which remain withheld to this day), but did require him to
provide a list of the proxies used in each step. For the AD1400 network, the
list included the four French and Moroccan tree ring series and two North
American PCs. This list was published in July 2004 and has been relied on in
subsequent replication efforts.

Although Mann refused to provide results of individual steps, the archived
reconstruction (link) is a splice of the 11 steps, using the results of the
latest step where available. Its values between 1400 and 1449 thus provide a
50-year glimpse of the AD1400 reconstruction. This is a long enough period to
test whether any proposed replication is exact. (I recently noticed that the
Dirty Laundry data in the Climategate archive provides a second glimpse of
values between 1902 and 1980 for the AD1400 and AD1600 networks.)

At different times, McIntyre-McKitrick, Wahl-Ammann and Climate Audit readers
Jean S and UC tried to exactly replicate the individual steps in the spliced
MBH98 results, but none of us succeeded. When Wahl-Ammann published their code,
I was able to reconcile their results to our results to five nines accuracy
within a few days of their code release (e.g. link, link). It ought to have been
possible to exactly reconcile to MBH98 results, but none of us could do so. The
figure below (from May 2005) shows the difference between the Wahl-Ammann
version and MBH98 version. At times, the differences are up to 1 sigma.  To be
clear, the shape of the replication – given MBH data and methods – was close to
MBH98 values, but there was no valid reason why it couldn’t be replicated
exactly and, given the effort to get to this point, each of us wanted to finish
the puzzle.



In 2006, Wegman wryly observed  that, rather than replicating Mann and
disproving us, Wahl and Ammann had reproduced our calculations.



Around 2007, Jean S and UC both tried unsuccessfully to replicate the MBH98
steps. I had posted up scripts in R in 2003 and 2005. UC posted up a clean
script in Matlab for MBH replication. Eventually, Jean S speculated that Mann’s
list of proxies must be incorrect, but we all eventually gave up.

A few years ago, Soderqvist noticed UC’s script for MBH98 and began reverse
engineering experiments in which he augmented the AD1400 network with other
candidate proxies available in the Climategate documents (mbh-osborn.zip). This
included many series that were not available in the Nature, NOAA or Penn State
supplementary information, but had, at one time, been in the now-dead UVA
archive that was temporarily available in late 2003 and early 2004.

In October 2021, Soderqvist determined that the AD1400 and AD1450 proxy lists
were incorrect and contacted Mann, pointing out the errors and the required
corrections to the SI:

> For the AD 1400 and AD 1450 steps, the reconstruction is not a linear
> combination of the archived proxies. The correct proxy lists can be determined
> by adding available proxies until the reconstruction is in their linear span.
> It turns out that PCs 3 to 6 of the NOAMER network have been replaced with
> proxies that were not used in these time steps. For comparison, the follow-up
> paper “Long-term variability in the El Niño/Southern Oscillation and
> associated teleconnections” lists the first six PCs (table 1, entries 89-94).
> 
> There is also an undocumented AD 1650 step with its own set of proxies. It is
> just the AD 1600 set with some additional proxies.
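Soderqvist’s linear-span test can be sketched numerically: a candidate proxy list is consistent with the archived reconstruction only if the reconstruction lies (to machine precision) in the linear span of those proxies. A toy illustration with synthetic series:

```python
import numpy as np

rng = np.random.default_rng(2)

proxies = rng.normal(size=(50, 6))   # hypothetical candidate proxy matrix
# A "reconstruction" built from a subset of the candidates (columns 0, 1, 4)
recon = proxies[:, [0, 1, 4]] @ np.array([0.5, -0.2, 0.8])

def in_linear_span(X, y, tol=1e-8):
    """True if y is (numerically) a linear combination of the columns of X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.linalg.norm(y - X @ coef) < tol

wrong_set = proxies[:, [0, 1, 2]]   # candidate list missing an actually-used series
right_set = proxies[:, [0, 1, 4]]   # the set actually used
```

Adding or removing one series flips the test decisively: with the wrong set, the least-squares residual is far from zero; with the right set, it vanishes.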

Instead of issuing a Corrigendum or otherwise correcting the SI, Mann and
associates buried the information deep in a Penn State archive (see link).  The
covering text cited Soderqvist, together with Wahl and Ammann, as “two
emulations” of the MBH98 reconstruction, ostentatiously failing to mention our
original emulation of the MBH98 reconstruction (which exactly reconciled to the
later Wahl-Ammann version: see link; link) or emulations by UC or Jean S, on
which Soderqvist had relied.

Two more years passed.

Earlier this year, I corresponded and collaborated with Soderqvist
(@detgodehab on Twitter) on his remarkable discovery of a fatal flaw in the
“signal-free” tree ring methodology used in PAGES2K and now widely popular (see
X here).

A few days ago, I posted a thread on MBH98 residuals (link) in which I observed
that several datasets connected with notorious Dirty Laundry email contained
1902-1980 excerpts from MBH98 AD1400 and AD1600 steps that had not been
previously identified as such.  Soderqvist commented on the thread, pointing out
(in passing) a quirky Mannian error in calculation of average temperatures that
no one had noticed in the previous 25 years.

Impressed once again by his reverse engineering acumen, I posed (or thought that
I was posing) the longstanding mystery of reverse engineering the actual list of
MBH98 proxies used in the AD1400 step as something that might interest him.  I
even suggested that the NOAMER PC3 might be involved somehow (on the basis that
it was used in the AD1000 step and might have been used in AD1400 step.)



As it turned out, Soderqvist had not only thought about the problem, but had
figured it out. And the PC3 was involved.



The information at his github site showed that four series listed in the SI but
not actually used were two French tree ring series and two Moroccan tree ring
series.  They were also listed in the AD1450 and AD1500 networks, but do not
appear to have been actually used until the AD1600 network.

A few days ago, Soderqvist archived the results of the individual steps at his
github (see link here). I checked his AD1400 results against the 1400-1449
excerpt in the splice version and the 1902-1980 excerpt in the Dirty Laundry
data, and the match was exact. I’ve additionally collated his results into an
xlsx spreadsheet in a second archive here:
https://climateaudit.org/wp-content/uploads/2023/11/recon_mbh-1.xlsx.



So, after all these years, we finally have the values for the individual MBH98
steps that Mann and Nature refused to provide so many years ago.

New Light on An Old Dispute

But there’s another reason why this particular error in listing proxies
(claiming use of two North America PCs, rather than the six PCs actually used)
intrigued me.

During the original controversy, Mann did not merely list the use of two NOAMER
PCs in obscure Supplementary Information: he vehemently and repeatedly asserted
that he had used two North American PCs in the AD1400 network because that was
the “correct” number to use under “application of the standard selection
rules”. It was a preoccupation at the opening of Realclimate in December 2004,
when Mann was attempting to block publication of our submission to GRL.

For example, the very first article (scroll through 2004 archives to page 9
link) in the entire Realclimate archive, dated November 22, 2004 – almost three
weeks before Realclimate opened to the public on December 10, 2004 – is entitled
PCA Details: PCA of the 70 North American ITRDB tree-ring proxy series used by
Mann et al (1998). Mann stated that two North American PCs were used in the
AD1400 network based on “application of the standard selection rules” applied to
short-centered data:



Realclimate opened on December 10, 2004 (link) and, on opening, featured two
attacks on us by Mann (link; link) entitled False Claims by McIntyre and
McKitrick regarding the Mann et al (1998) reconstruction and Myth vs Fact
Regarding the “Hockey Stick“.   Both were dated December 4, 2004.  Mann cited
our Nature submission as the target of his animus.

In these earliest Realclimate articles, (link; link) Mann vehemently asserted
(linking back to the PCA Details article) that they had used two PC series in
the MBH98 AD1400 network by application of Preisendorfer’s Rule N to principal
components calculated using “MBH98 centering” i.e. Mann’s incorrect short
centering:

> The MBH98 reconstruction is indeed almost completely insensitive to whether
> the centering convention of MBH98 (data centered over 1902-1980 calibration
> interval) or MM (data centered over the 1400-1971 interval) is used. Claims by
> MM to the contrary are based on their failure to apply standard ‘selection
> rules’ used to determine how many Principal Component (PC) series should be
> retained in the analysis. Application of the standard selection rule
> (Preisendorfer’s “Rule N’“) used by MBH98, selects 2 PC series using the MBH98
> centering convention, but a larger number (5 PC series) using the MM centering
> convention.

In an early Climate Audit article (link), I tested every MBH98 tree ring and
step using Preisendorfer’s Rule N and was unable to replicate the numbers of
retained PCs reported in the SI using that rule.

Soderqvist’s discovery that MBH98 used six North American PCs not only refutes
Mann’s claim that he used two North American PCs, but also refutes his claim
that he used Preisendorfer’s Rule N to select them. It raises a new question:
how did Mann decide to retain six North American PCs in the AD1400 network? It
obviously wasn’t Preisendorfer’s Rule N, so what was the procedure? Mann has
never revealed it.

Subsequent to the original controversy, I’ve written many Climate Audit posts
on properties of principal components calculations, including (some of what I
regard as the most interesting) posts on Chladni patterns arising from
principal components applied to spatially autocorrelated tree ring series.
The takeaway is that, for a large-scale temperature reconstruction, one should
not use any PCs below the PC1. The reason is blindingly obvious once stated:
the PC2 and lower PCs contain negative signs for approximately half the
locations, i.e. they flip the “proxies” upside down. If the tree ring data are
indeed temperature “proxies”, they should be used in the correct orientation.
Thus, there is no need for lower order PCs. In many important cases, the PC1 is
similar to a simple average of the series. Lower order PCs tend to be contrasts
between regional groupings. In the North American network, southeastern US
cypress form a grouping that is identifiable in the PC5 (centered) and,
needless to say, the stripbark bristlecones form another distinct grouping.
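The sign-flipping point is easy to demonstrate: for a network of positively correlated series, the PC1 loadings all share one sign (so the PC1 behaves like a weighted average), while the PC2 loadings necessarily take both signs, entering some series “upside down”. A toy illustration with synthetic series:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy network: 15 positively correlated series (common signal plus noise)
signal = rng.normal(size=(200, 1))
series = signal + 0.8 * rng.normal(size=(200, 15))

centered = series - series.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# PC1 loadings all share one sign: the PC1 is close to a simple average
same_sign = np.all(vt[0] > 0) or np.all(vt[0] < 0)
pc1_scores = centered @ vt[0]
corr_with_mean = np.corrcoef(pc1_scores, centered.mean(axis=1))[0, 1]

# PC2 loadings form a contrast: some series are flipped "upside down"
mixed_sign = np.any(vt[1] > 0) and np.any(vt[1] < 0)
```

The mixed signs in PC2 are forced by orthogonality: a vector orthogonal to an all-positive PC1 loading vector must have entries of both signs.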

Mann then observed that, under MM05 (correct) centering, the “hockey stick”
pattern appeared in the PC4. For the subsequent inverse regression step of
MBH98 methodology, it didn’t matter whether the hockey stick pattern appeared
in the PC1; inclusion even as a PC4 was sufficient to impart a hockey stick
shape to the resulting reconstruction:

> Although not disclosed by MM04, precisely the same ‘hockey stick’ PC pattern
> appears using their convention, albeit lower down in the eigenvalue spectrum
> (PC#4) (Figure 1a). If the correct 5 PC indicators are used, rather than
> incorrectly truncating at 2 PCs (as MM04 have done), a reconstruction similar
> to MBH98 is obtained

Being a distinct regional pattern does not prove that the pattern is a
temperature proxy. “Significance” under Rule N is, according to Preisendorfer
himself, merely an “attention getter, a ringing bell… a signal to look deeper,
to test further”. See our discussion of Preisendorfer here.

> The null hypothesis of a dominant variance selection rule [such as Rule N]
> says that Z is generated by a random process of some specified form, for
> example a random process that generates equal eigenvalues of the associated
> scatter [covariance] matrix S…  One may only view the rejection of a null
> hypothesis as an attention getter, a ringing bell, that says: you may have a
> non-random process generating your data set Z. The rejection is a signal to
> look deeper, to test further.

Our response has always been that the relevant question was not whether the
hockey stick pattern of the stripbark bristlecones was a distinctive pattern
within the North American tree ring network, but whether this pattern was local
and specialized, as opposed to an overall property; and, if local to stripbark
bristlecones, whether the stripbark bristlecones were magic world thermometers.
The 2006 NAS panel recommended that stripbark bristlecones be avoided in
temperature reconstructions, but their recommendation was totally ignored.  They
continued in use in Mann et al 2008, PAGES2K and many other canonical
reconstructions, none of which are therefore independent of Mann et al 1998-99.

While most external attention on the MBH98 controversy has focused on principal
component issues, when I reviewed the arc of Climate Audit posts in 2007-2008
prior to Climategate, they were much more focused on properties of the inverse
regression step subsequent to the principal components calculation and, in
particular, on overfitting issues arising from inverse regression. Our work on
these issues got sidetracked by Climategate, but there is a great deal of
interesting material that deserves to be followed up.

By Stephen McIntyre | Posted in Uncategorized | Tagged MBH98, preisendorfer,
soderqvist | Comments (52)


MBH98 CONFIDENCE INTERVALS

Nov 10, 2023 – 10:55 AM

Continued from here.

The Dirty Laundry residual datasets for AD1000, AD1400 and AD1600 were each
calculated using Mann’s “sparse” instrumental dataset, but the resultant sigmas
and RE(calibration) statistics don’t match reported values.   In contrast, the
Dirty Laundry residual dataset for the AD1820 step, which was calculated by Tim
Osborn of CRU because Mann “couldn’t find” his copy of the AD1820 residual
data,  used a different MBH98 target instrumental dataset – the “dense”
instrumental series.

Question: is it possible that Mann had two versions of the residual data: sparse
and dense? And that he chose the dense version for MBH98 statistics (sigma,
RE_calibration) because it yielded “better” statistics, but inadvertently sent
the sparse version (with worse values) to Osborn?

This appears to be exactly what happened. If one uses the Dirty Laundry values
for the reconstruction in 1902-1980 versus the MBH98 dense temperature series,
one gets an exact replication of the reported MBH98 calibration RE and sigma
(standard error of residuals) for the AD1400 and AD1600 steps, and of the
reported MBH99 calibration RE for the AD1000 step.



Conclusion: We KNOW that MBH98 calculated residual series using the sparse
target because they were sent to Osborn in the Dirty Laundry email and shown in
the MBH99 submission Figure 1a.  We KNOW that MBH98 calculated residual series
using the dense target because of the reported RE_calibration and sigma values
in MBH98.  The corollary is that MBH98 calculated two sets of residual series
and then selected the “better” values for display without disclosing the worse
values. Or the selection operation.

MBH99 confidence intervals are related to MBH98 confidence intervals, but
different. They were a longstanding mystery during the heyday of Climate Audit
blog. In the next post, I’ll review MBH99 confidence intervals. We’re a bit
closer to a solution, and maybe a reader will be able to figure out the
balance.

Over and above this particular issue, there is another, even more fundamental,
issue: the use of calibration period residuals to estimate confidence intervals
when there is a massive failure of verification period r^2 values. Prior to
Climategate, I had written several posts and comments in which I had raised the
issue and problem of massive overfitting in the calibration period through a
little discussed MBH98/99 step involving a form of inverse regression. (Closer
to PLS regression than to OLS regression – some intuitions of OLS practitioners
have to be set aside.)   There are some very interesting issues and problems
arising from this observation. And even some points of potential mathematical
interest. I’ll try to elaborate on this in a future post.

Postscript

There is a substantial and surprisingly large difference between the two MBH98
target instrumental series (see diagram below). The sparse series, according to
MBH98, is the “subset of the gridded data (M′ = 219 grid-points) for which
independent values are available from 1854 to 1901″; the dense series is
calculated for 1902-1980 from 1082 gridcells. In the 1902-1980 (MBH
calibration) period, there is considerably more variability in the sparse
series.

 



 

 

 

By Stephen McIntyre | Posted in Uncategorized | Tagged Confidence interval,
dirty laundry, MBH98, RE | Comments (6)


“DIRTY LAUNDRY” RESIDUALS

Nov 8, 2023 – 8:31 AM

Continued from previous post link.

The data associated with the Climategate “dirty laundry” email had other
interesting information on Mann’s calculation of confidence intervals and the
related calculation of RE statistic.  This post draws heavily on offline
comments by Jean S and UC, both long before and after Climategate.

The left panel below is Tim Osborn’s summer 2003 plot of the AD1000 residuals
in one of the “dirty laundry” datasets sent to him by Mann. It matches the
AD1000 data in the right panel – Figure 1a in the submission version of Mann et
al 1999. UC had noticed this figure in the submission version in 2006 or so.



While the plot of calibration residuals was not carried forward into the
published version of Mann et al 1999, an identical figure showing the spectrum
of calibration residuals appears in both versions (see below). This almost
certainly precludes an unreported switch of the calculation of calibration
residuals between submission version and publication (though, with Mann et al,
nothing can be excluded.)



Here’s the rub: the standard errors reported in MBH98 and MBH99 are lower (and
the RE_calibration values much higher) than the values calculated using the
Dirty Laundry data.

The RE_NH_cal(ibration) values for MBH98 were reported in its original
statistical SI (link) and, for MBH99, in its running text. The MBH98 sigmas
(standard error of residuals) for each step can be extracted from the archived
stepwise reconstruction mannnhem.dat (NOAA link).  The standard error of
residuals in the Climategate “dirty laundry” datasets (AD1000, AD1400, AD1600)
can be trivially calculated. Osborn did so in his August 2003 Climategate I
document entitled Mann uncertainty.docx.  I verified the calculation – values
are shown below.  The calibration RE (RE_cal) is trivially calculated as 1-
(se_residuals/sd_obs)^2. (The standard deviation of the target observation data
used in the above Dirty Laundry datasets is 0.2511.)
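That calculation, as I understand it (taking the standard error as the root-mean-square residual), is just:

```python
import numpy as np

def calibration_re(residuals, sd_obs):
    """RE_cal = 1 - (se_residuals / sd_obs)^2, taking se as the RMS residual."""
    se = np.sqrt(np.mean(np.asarray(residuals) ** 2))
    return 1 - (se / sd_obs) ** 2

# Hypothetical residuals for illustration, with sd_obs = 0.2511 as quoted
# above for the target observation data
resid = [0.05, -0.12, 0.08, -0.03, 0.10, -0.07]
re_cal = calibration_re(resid, 0.2511)
```

A perfect fit (zero residuals) gives RE_cal = 1; residuals as large as the target’s own variability give RE_cal = 0.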



Conclusions:

The Dirty Laundry residual datasets do NOT match the reported RE calibration or
sigmas (standard error of residuals) reported for MBH98 (AD1400, AD1600) or the
RE calibration reported for MBH99, even though the Dirty Laundry residuals for
AD1000 match Figure 1 in the MBH99 submission.  The calculation of MBH
confidence intervals was a standing puzzle in pre-Climategate discussion – see
review here – and never fully resolved. While the reported numbers do not match
the data in the Dirty Laundry residual datasets, the glimpses of the underlying
reconstructions in the Dirty Laundry datasets provide data that can be used to
finally resolve the calculation of MBH98 confidence intervals. More on this in
the next post; see here.

Footnotes:

(1)  Here is a screengrab of relevant statistics in the original SI for MBH98
(link):



(2) Here is a 2005 figure showing MBH98 confidence intervals for each step, as
extracted from the reconstruction archive:



Figure S1. Standard error (“sigma”) of MBH98 Reconstruction Steps. Calculated
from confidence intervals for MBH98 reconstruction at NOAA archive here.

By Stephen McIntyre | Posted in Uncategorized | Comments (5)


MANN’S “DIRTY LAUNDRY”

Nov 7, 2023 – 2:26 PM

As the date for the Mann-Steyn/Simberg libel trial approaches, I’ve been
reviewing my files on MBH98 and MBH99. It’s about 15 years since I last looked
at these issues.

While revisiting these issues, I re-examined some data associated with the
notorious “dirty laundry” Climategate email (link) – excerpt shown below –
that turns out to provide a glimpse of the long obfuscated results for the
AD1000, AD1400 and AD1600 steps over the 1902-1980 period. I don’t recall this
being noticed at the time, even by Jean S or UC. The identification proved
interesting.

> Attached are the calibration residual series for experiments based on available
> networks back to: AD 1000, AD 1400, AD 1600. I can’t find the one for the
> network back to 1820! But basically, you’ll see that the residuals are pretty
> red for the first 2 cases, and then not significantly red for the 3rd case–its
> even a bit better for the AD 1700 and 1820 cases, but I can’t seem to dig them
> up. In any case, the incremental changes are modest after 1600–its pretty clear
> that key predictors drop out before AD 1600, hence the redness of the residuals,
> and the notably larger uncertainties farther back… You only want to look at the
> first column (year) and second column (residual) of the files. I can’t even
> remember what the other columns are! Let me know if that helps.

The data referred to in this email were located in the Climategate I directory
mbh98-osborn.zip as nh-ad1400-resid.dat etc and were discussed in a draft Osborn
memorandum Mann uncertainty.doc.  The columns are unlabelled.  The second column
is residuals.  The AD1400 example is shown below:

An aside about Mann’s favorite “RE” statistic. While Mann (and Wahl-Ammann)
hyperventilated about the supposedly unique validation imparted by an RE
statistic, the RE statistic is extremely sensitive to the choice of calibration
and verification periods – an issue that was never addressed by Mann or
Wahl-Ammann. So, in the above example, if the calibration period were set at
1920-1960 and the verification period at 1961-1980, the RE fails miserably. If
the RE statistic is so extremely sensitive to the choice of calibration and
verification periods, it follows that the underlying reconstruction does NOT
possess the claimed “robustness” or “skill”.
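This sensitivity is easy to demonstrate with a stylized example (synthetic numbers, not the MBH data): because the RE benchmark is the calibration-period mean, the same reconstruction, judged over the same verification period, can pass RE under one calibration period and fail under another.

```python
import numpy as np

def re_stat(obs, pred, cal_mean):
    """Reduction of Error: skill relative to predicting the calibration mean."""
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - cal_mean) ** 2)

years = np.arange(1902, 1981)
obs = 0.01 * (years - 1902)                      # stylized rising target

# A "reconstruction" that tracks the target until 1960, then diverges downward
pred = np.where(years <= 1960, obs, 0.58 - 0.02 * (years - 1960))

ver = years > 1960                               # verification period 1961-1980

# Same reconstruction, same verification period -- only the calibration differs
cal_mean_full = obs[(years >= 1902) & (years <= 1960)].mean()  # calibrate 1902-1960
cal_mean_late = obs[(years >= 1920) & (years <= 1960)].mean()  # calibrate 1920-1960

re_full = re_stat(obs[ver], pred[ver], cal_mean_full)   # positive: "passes"
re_late = re_stat(obs[ver], pred[ver], cal_mean_late)   # negative: "fails"
```

The later calibration period raises the benchmark mean, shrinking the denominator of the RE ratio until the same prediction errors flip the statistic from pass to fail.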

Continued here.

 

By Stephen McIntyre | Posted in Uncategorized | Tagged Confidence interval,
MBH98, mbh99, osborn, RE | Comments (21)


JDOHIO: WAS MICHAEL MANN EXONERATED BY THE POST-CLIMATEGATE INVESTIGATIONS AS
WAS DECIDED BY THE DC COURT OF APPEALS?

Oct 23, 2023 – 11:05 AM

SM: This article by JD Ohio was submitted and briefly online on February 23,
2018, but, for some reason that I don’t recall, was taken offline. In any event,
as Mann’s ancient libel case wends its way into a DC court, I noticed that it
was still pending.  It deals concisely with a central issue in the case. 

By JDOhio

Analysis of Court of Appeals’ Defamation Opinion Holding That
Climategate Inquiries Exonerated Michael Mann

Foreword

I have followed climate matters for a long time and have been aware of the
inquiries that followed Climategate. So, instinctively, when Michael Mann
claimed that Climategate inquiries exonerated him, I believed the claim was
incorrect. There were four inquiries on which the appellate Court focused (see
p. 96 of the court opinion, which referred to “four separate investigations”)
and which accepted Mann’s argument that he had been exonerated. See link to
opinion:
https://cei.org/litigation/michael-e-mann-v-national-review-and-cei-et-al The
Court’s identification of the inquiries was confusing, but I will focus on the
main reports that seem to be the basis for the Court’s conclusion. Having
reviewed the inquiries closely, my opinion is still that the investigations did
not exonerate Mann.

Some of the emails are misleading, and the various reports and graphs that are
important to the resolution of this case are very hard to keep track of. If one
attempts to dive into the middle of this dispute without having a clear idea of
the background, it is easy to get sucked down a rathole of confusing and
overlapping studies, graphs, emails and inquiries. The point of this blog post
is to create an accurate reference work that is comparatively easy to follow.
So, although it is somewhat tedious, I have gone into a good amount of detail on
what would otherwise be minor details.

Concise Summary of Findings

Although the Court was not always clear as to what four studies it was looking
at (See *** at end of this post), here is a brief description of my findings
pertaining to the studies most relevant to the Court opinion.

1. Muir Russell Report (also called the Independent Climate Change E-mails
Review (ICCER)): This report was commissioned by the University of East
Anglia (UEA) to look at issues that arose concerning the UEA following the
release of 1073 UEA emails. Although Mann was mentioned in some of the emails,
the real focus was on the academic integrity of the UEA. It could not exonerate
Mann. The House of Commons reviewed Muir Russell, and the Court subsumed Muir
Russell under the United Kingdom House of Commons Report.

2. Oxburgh Report (formally known as the “Science Appraisal Panel of Climatic
Research Unit of University of East Anglia”): This report was reviewed in the
House of Commons report. It did not even mention Mann or any of his
publications.

3. Penn State Two Stage Inquiry: These reports did not closely examine
scientific criticism of Mann’s work, and Penn State totally flubbed the
investigation into whether, at the very least, Mann indirectly took part in an
email deletion scheme when he forwarded an email from Phil Jones to Gene Wahl
asking for the deletion of emails pertaining to the Fourth Assessment Report
(AR4) of the IPCC.

4. National Science Foundation (NSF) Close-Out Memo regarding Penn State
investigation of Michael Mann. This Memo is completely unsubstantiated; it is
not clear who wrote the memo or did the underlying work. Also, although it is
widely believed that it is referring to Penn State and Mann, it never explicitly
names either.

5. EPA Reconsideration of Endangerment Finding: On p. 83 of the opinion, the
Court referred to the EPA as having found that the science underlying the Hockey
Stick was valid. When the EPA did look at Mann specifically, it downplayed his
contributions, and he was only mentioned once in the Reconsideration Report.
(See p. 85 of report) Since the EPA’s consideration of Mann was so skimpy, and
was only briefly mentioned by the Court, I will not discuss it further.

Overview of Important Science and Email Issues
Useful to Understanding the Legal Dispute

I. Problems with Tree Proxies (Divergence)

Around 1960, tree proxies that had seemed to be accurate indicators of rising
and falling temperatures began showing declines (less growth and density) while
the instrumental records were showing rising temperatures. There seems to be no
doubt that a number of tree proxies were simply inaccurate after 1960. See
https://climateaudit.org/2008/11/30/criag-loehle-on-the-divergence-problem/
Thus, to the extent that tree proxies were known to be inaccurate, it is
sometimes reasonable, with full DISCLOSURE, to splice together old tree proxies
(from, say, 500 years ago up to 1960) with instrumental records. Continuing to
use tree proxies known to be defective would obviously be wrong.
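Such a disclosed splice can be sketched in a few lines of Python. The function name, years, and anomaly values below are invented for illustration and are not drawn from any actual reconstruction:

```python
# Hypothetical illustration of a DISCLOSED splice: proxy values are used
# before the cutover year, instrumental values from the cutover onward, and
# every value is labeled with its source so the join is plain to a reader.
def splice_series(proxy, instrumental, cutover=1960):
    """proxy, instrumental: dicts mapping year -> temperature anomaly (deg C)."""
    spliced = {}
    for year, value in proxy.items():
        if year < cutover:
            spliced[year] = ("proxy", value)
    for year, value in instrumental.items():
        if year >= cutover:
            spliced[year] = ("instrumental", value)
    return dict(sorted(spliced.items()))

# Invented example values: the proxy diverges downward after 1960 while
# the instrumental record rises.
combined = splice_series(
    {1958: -0.10, 1959: -0.05, 1960: -0.20, 1961: -0.30},
    {1960: 0.00, 1961: 0.05, 1962: 0.10},
)
```

Because each value carries a source label, anyone reading the combined series can see exactly where the proxy record ends and the instrumental record begins; that is the disclosure the paragraph above argues is essential.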

The problem with tree proxies raises a huge issue. If they aren’t accurate now,
how do we know that they were accurate four or eight hundred years ago? The
answer is that we don’t know. However, for some reason, a lot of skeptics place
the vast majority of their focus on the instrumental temperatures rather than on
the fairly easy question of the apparent unreliability of proxies. It seems to
me that the only way anyone can say that today’s global average temperatures
(for example) are, let’s say, 2.5 degrees C higher than those in the 10th
century is to preface that statement with the qualifier, “my best guess is….”

II. The Misleading “Hide the Decline” Email

From Phil Jones: “I’ve just completed Mike’s [Mann’s] Nature trick of adding in
the real temps to each series for the last 20 years (i.e., from 1981 onwards)
and from 1961 for Keith’s to hide the decline.”

From: Phil Jones [November 1999]
To: ray bradley ,mann@xxxxx.xxx, mhughes@xxxx.xxx
Subject: Diagram for WMO Statement
Date: Tue, 16 Nov 1999 13:31:15 +0000
Cc: k.briffa@xxx.xx.xx,t.osborn@xxxx.xxx

Dear Ray, Mike and Malcolm,
Once Tim’s got a diagram here we’ll send that either later today or
first thing tomorrow.
I’ve just completed Mike’s Nature trick of adding in the real temps
to each series for the last 20 years (ie from 1981 onwards) amd from
1961 for Keith’s to hide the decline. Mike’s series got the annual
land and marine values while the other two got April-Sept for NH land
N of 20N. The latter two are real for 1999, while the estimate for 1999
for NH combined is +0.44C wrt 61-90. The Global estimate for 1999 with
data through Oct is +0.35C cf. 0.57 for 1998.
Thanks for the comments, Ray.

Cheers
Phil

Prof. Phil Jones
Climatic Research Unit Telephone +44 (0) xxxxx
School of Environmental Sciences Fax +44 (0) xxxx
University of East Anglia
Norwich Email p.jones@xxxx.xxx
See https://climateaudit.org/2009/11/20/mike%e2%80%99s-nature-trick/

(See also Climate
Audit https://climateaudit.org/2011/03/29/keiths-science-trick-mikes-nature-trick-and-phils-combo/)

When you first hear the phrase “hide the decline,” it is easy to believe that
the speaker is talking about hiding a real decline in instrumental temperatures.
Instead, what Jones is talking about is hiding the decline evident in tree
proxies after approximately 1960. However, if you are going to attempt 1,000- or
1,400-year temperature reconstructions, just a little bit of thought will make
it clear that the tree ring proxies have to be dropped after 1960. On the other
hand, there is a large question as to whether it is worthwhile to do 1,000-year
reconstructions when the proxies used are known to be unreliable in today’s
world; how is it really possible to know that proxies were reliable 1,000 years
ago?

It is true that, before Mann introduced the 1998 Hockey Stick, the divergence
problem had been openly discussed in the literature. What Jones was doing when
he spoke of “hid[ing] the decline” was attempting to gloss over the divergence
problem and the decline in temperatures that would have been shown by continuing
to use tree proxies, as in a paper written by Keith Briffa of the University of
East Anglia (UEA), who was part of its Climatic Research Unit (CRU).

III. Mike’s Nature Trick

This Trick is mentioned in the Nov. 16, 1999 email.  To fully understand it, one
must understand statistical smoothing and be conversant with, and compare, three
different studies.  I am not good at statistics and any explanation I could give
would unduly complicate this post.  So, I am skipping it.  Steve McIntyre was
kind enough to provide his summary, which is attached at the end of this post. 
See ***4

Analysis of “Exoneration” Part of Court of Appeals Decision

The Court of Appeals issued a lengthy, 111-page opinion, holding that Michael
Mann had a valid defamation case to present against Rand Simberg, Rich Lowry,
the National Review, the Competitive Enterprise Institute, and, inferentially,
Mark Steyn (who did not appeal, but whose case would rise and fall with that of
the others). The portion of the opinion that I am focusing on is that portion,
from p. 82 to p. 97, in which the Court heavily relied on four investigations to
reach the conclusion that the defendants could have acted with actual malice in
criticizing Mann for the research he did.

My basic conclusion is that the four “investigations/endeavors” did not
thoroughly investigate Mann and that the Court made a clear mistake when it
incorrectly relied on the investigations to allow Mann’s lawsuit to proceed.

A. Some Publications, Resources and Facts That Are Important to the Case.
1. The Court of Appeals Decision
(See https://cei.org/litigation/michael-e-mann-v-national-review-and-cei-et-al )
2. The alleged defamatory columns attached to the end of the decision.
3. The Defendants are not claiming that Mann acted in a criminally fraudulent
manner in the sense that he could have made up numbers. The defendants were
using the term “fraud” in a polemical sense. In common polemical usage,
“fraudulent” doesn’t mean honest-to-goodness criminal fraud. It means
“intellectually bogus and wrong.” (See p. 110 of the opinion)
4. MBH98 (the first Hockey Stick paper) and MBH99 (the second Hockey Stick
paper, going back 1,000 years). See
https://en.wikipedia.org/wiki/Michael_E._Mann See also S. McIntyre’s collection
of Hockey Stick publications.

> Hockey Stick Studies


5. WMO Diagram and explanation of Hide the Decline email. Also, IPCC Third
Assessment Graph
https://climateaudit.org/2009/12/10/ipcc-and-the-trick/ For more context, see
http://www.americanthinker.com/articles/2010/02/climategates_phil_jones_confes.html
6. Hide the Decline email:
I’ve just completed Mike’s Nature trick of adding in the real temps to each
series for the last 20 years (i.e., from 1981 onwards) and from 1961 for Keith’s
to hide the decline. See:
https://www.justfacts.com/globalwarming.hidethedecline.asp
7. Phil Jones deletion email request sent to Mann for him to forward to Eugene
Wahl, which Mann did.
“Mike,
Can you delete any emails you may have had with Keith [Briffa] re AR4? Keith
will do likewise… Can you also email Gene [Wahl] and get him to do the same? I
don’t have his new email address. We will be getting Caspar [Ammann] to do
likewise.
Cheers, Phil

**** Mann reply:

“Hi Phil,
… I’ll contact Gene [Wahl] about this ASAP. His new email is: generwahl@xxx talk
to you later, mike

For context, see: https://climateaudit.org/2011/02/23/new-light-on-delete-any-emails/
and https://climateaudit.org/2011/09/02/nsf-on-jones-email-destruction-enterprise/

B. Conceptual Errors Made by the Court of Appeals

The Court makes three fundamental errors. First, it assumes that those who label
themselves as investigators really do investigate. Second, it assumes that a
general investigation (assuming arguendo that a real investigation occurred)
into a scientific field of study that finds there was no fraud in the field
exonerates all of those in that field even if any individual’s work was only
tangentially involved, if at all. Third, it assumes that those with advanced
degrees, by virtue of their possession of advanced degrees, are competent and
fair commentators and investigators in an area of much controversy. (See p. 85
of opinion)

Although the Court refers to “eight separate inquiries” (p. 82), in reality it
only focused on four. The Court concluded on p. 96 that:

“We come to the same conclusion as in Nader. In the case before us now, not
one but four separate investigations were undertaken by different bodies
following
accusations, based on the CRU emails, that Dr. Mann had engaged in deceptive
practices and scientific and academic misconduct. Each investigation unanimously
concluded that there was no misconduct.”

For a more detailed explanation of the four reports, one may go to McKitrick.
See
https://www.bing.com/search?q=ross+mckitrick+summary+of+climategate+investigations&qs=n&form=QBRE&sp=-1&pq=ross+mckitrick+summary+of+climategate+investigations&sc=0-52&sk=&cvid=DCB9DE01989A4284AA5A0D887E2E1254
I will discuss the two UEA-sponsored endeavors first, then the House of Commons
report, which evaluated them, and then the NSF report.

1. The House of Commons Report

On January 25, 2011, the House of Commons issued its report regarding the
investigations of the Climatic Research Unit of the University of East Anglia.
Essentially what it did was to evaluate the Oxburgh and Muir Russell Reports.
Inferentially, it also independently, in a small way, evaluated climate science
as practiced by the CRU.

With respect to Michael Mann, his name is found three times in the report. See
https://publications.parliament.uk/pa/cm201011/cmselect/cmsctech/444/44410.htm
His name was mentioned twice in connection with two papers he co-authored, and
once in regard to an email in which Phil Jones asked Mann to keep matters
dealing with multi-proxy studies secret from two other climate research
colleagues. (See para. 71 of the Report.) Although the ethics of Mr. Jones were
being examined, there was no focus on the ethics of Michael Mann.

There are numerous scientific and practical issues raised by the report.
However, although Mann was mentioned tangentially, there was no focus whatsoever
on the individual quality of his work or of Mann’s personal ethics.

The report, in a small way, validates climate science by finding that those
working at the UEA were not fraudulently manipulating data and were not
unethically manipulating peer review. However, it in no way focused on Mann.
Thus, there is no way that it exonerated Mann.

2. Oxburgh Endeavor (claimed investigation)

The House of Commons Report devoted virtually all of its attention to examining
the validity of two (claimed) investigatory reports commissioned by the UEA. The
first undertaking was the Science Appraisal Panel of the Climatic Research Unit
of the University of East Anglia report, issued April 14, 2010. It is commonly
known as the Oxburgh [Ronald Oxburgh] Inquiry. See ftn. 62 of
https://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy#Science_Assessment_Panel

It is clear beyond any doubt that this report did not clear Michael Mann,
because it did not look at his work. Here are excerpts from the actual report:

> “The Panel was set up
> …to assess the integrity of the research published by the [East
> Anglia] Climatic Research Unit [Emphasis added]
> in the light of various external assertions
> … The essence of the criticism that the Panel was asked to address was that
> climatic data had been dishonestly selected, manipulated and/or presented to
> arrive at pre-determined conclusions that were not compatible with a fair
> interpretation of the original data….”
> 
> 2. The Panel was not concerned with the question of whether the conclusions of
> the published research were correct. Rather it was asked to come to a view on
> the integrity of the Unit’s research and whether as far as could be determined
> the conclusions represented an honest and scientifically justified interpretation
> of the data. The Panel worked by examining representative publications by
> members of the Unit and subsequently by making two visits to the University
> and interviewing and questioning members of the Unit…. ”
> 
> 3. The eleven representative publications that the Panel considered in detail are
> listed in Appendix B. The papers cover a period of more than twenty years and
> were selected on the advice of the Royal Society. All had been published in
> international scientific journals and had been through a process of peer review.
> CRU agreed that they were a fair sample of the work of the Unit. [Emphasis
> added]…
> 
> Conclusions [of report]
> 
> ….
> We cannot help remarking that it is very surprising that research in an area that
> depends so heavily on statistical methods has not been carried out in close
> collaboration with professional statisticians….”

It is absolutely clear that this report had nothing to do with Mann and could
not possibly have “exonerated” him. In fact, he was not mentioned in the report,
and the 11 publications that were reviewed did not include any in which Mann was
listed as a contributor. It is astonishing that Mann and his Attorney would make
this argument. See p. 12 of Mann brief of Sept. 3, 2014 and
https://cei.org/litigation/michael-e-mann-v-national-review-and-cei-et-al

3. Muir Russell Report

The Muir Russell report, officially called, in Great Britain, the Independent
Climate Change E-mails Review (ICCER) and commissioned by the UEA, was
extensively reviewed by the House of Commons. See here. It was a real, although
not completely competent, investigation, which issued a report that was 96 pages
long (as opposed to the Oxburgh report, which was 5 pages). On page 10, in para.
no. 6, it stated in its conclusions that:

> “The [Climategate] allegations relate to aspects of the behaviour of the CRU
> (UEA) scientists, such as their handling and release of data, their approach
> to peer review, and their role in the public presentation of results….”
> 
> ****
> 18. On the allegation of withholding station identifiers we find that CRU
> should have made available an unambiguous list of the stations used in each of
> the versions of the Climatic Research Unit Land Temperature Record (CRUTEM) at
> the time of publication. We find that CRU’s responses to reasonable requests
> for information were unhelpful and defensive.
> 
> 19. The overall implication of the allegations was to cast doubt on the extent
> to which CRU’s work in this area could be trusted and should be relied upon
> and we find no evidence to support that implication.
> 
> ****
> 
> 22. On the allegation that the phenomenon of “divergence” may not have been
> properly taken into account when expressing the uncertainty associated with
> [proxy] reconstructions, we are satisfied that it is not hidden and that
> the subject is openly and extensively discussed in the literature, including
> CRU papers.
> 
> 23. On the allegation that the references in a specific e-mail to a ‘trick’
> and to ‘hide the decline’ in respect of a 1999 WMO report figure show evidence
> of intent to paint a misleading picture, we find that, given its subsequent
> iconic significance (not least the use of a similar figure in the IPCC Third
> Assessment Report), the figure supplied for the WMO Report was misleading. We
> do not find that it is misleading to curtail reconstructions at some point per
> se, or to splice data, but we believe that both of these procedures should
> have been made plain – ideally in the figure but certainly clearly described
> in either the caption or the text.

As the above quotations make clear, Michael Mann’s work was not the focus of the
investigation, and, although his actions were of moderate importance to some of
the actions of the CRU scientists, his work, in and of itself, was only
tangentially scrutinized. For instance, on p. 81, the Muir Russell report stated
that Keith Briffa had explained:

> “WA2007 had then shown that the results of MBH98 could be replicated
> very closely using their implementation of the MBH98 methods and using the
> same data.”

However, that statement was diminished in importance by the statement that:

> “Briffa and his colleague Osborn commented that in any case the MBH98 was only
> one of 12 such reconstructions in figure 6.10 in Chapter 6, and does
> not therefore dominate the picture.” (p. 81 Muir Russell Report)

It is worth noting that, although skeptics were allowed to make submissions,
Muir Russell relied on Keith Briffa (of the CRU, and the lead author) and John
Mitchell (a review editor for Chapter 6) to evaluate the validity of the
paleoclimate work in AR4 [the Fourth Assessment Report of the IPCC]; since it
was their own ultimate product that was being evaluated, they were not neutral,
objective observers.

Thus, any claim that Muir Russell exonerated Mann is clearly false. In one very
important aspect, the Report, even considering its limited scope, was very
deficient; it failed to ask Phil Jones whether he deleted emails after Jones
received a FOIA request. See
https://climateaudit.org/2012/02/06/acton-tricks-the-ico/ (The particular email
that raised this issue is discussed in the next section.)

4. Penn State Endeavor — Alleged Research Investigation

Because the Penn State endeavor was superficial and did not interview critics of
Mann, it does not deserve to be called an “investigation.” Instead, I am calling
it an endeavor. In the Sixth Edition of Black’s Law Dictionary, the word
“investigate” is defined as:

> “To trace or track; to search into with care and accuracy; to find out by
> careful inquisition; examination; …”

Under Black’s definition, and general usage, what Penn State did was not an
investigation. It did not interview people who had problems with Mann’s work. It
is as if there were an accusation of theft, and the police went only to the
accused thief and asked him if he stole anything, and the accused said no. For
there to be a true investigation, people from both sides of the controversy have
to be questioned and interviewed. There was an inquiry report published on Feb.
3, 2010 and a later investigation report filed on June 4, 2010. About 85% of the
Feb. 3, 2010 report was subsumed into the June 4, 2010 report, so this
commentary will be focused on the June report.

For example, Steven McIntyre, in an Amicus Brief, pp. 3-4, stated that
“falsification concerns about Mann’s research” included:

 * “Mann’s undisclosed use in a 1998 paper (“MBH98”) of an algorithm which mined
   data for hockey-stick shaped series. The algorithm was so powerful that it
   could produce hockey-stick shaped “reconstructions” from auto-correlated red
   noise. Mann’s failure to disclose the algorithm continued even in a
   2004 corrigendum.”
 * Mann’s misleading claims about the “robustness” of his reconstruction to the
   presence/absence of tree ring chronologies, including failing to fully
   disclose calculations excluding questionable data from strip bark bristlecone
   pine trees.” ….
 * Mann’s deletion of the late 20th century portion of the Briffa temperature
   reconstruction in Figure 2.21 in the IPCC Third Assessment Report (2001) to
   conceal its sharp decline, in apparent response to concerns that showing the
   data would “dilute the message” and give “fodder to the skeptics.”
 * Mann’s insistence in 2004 that “no researchers in this field have ever, to
   our knowledge, ‘grafted the thermometer record onto'” any reconstruction. But
   it was later revealed that in one figure for the cover of the 1999
   World Meteorological Organization (WMO) annual report, the temperature
   record had not only been grafted onto the various reconstructions-and in the
   case of
   the Briffa reconstruction, had been substituted for the actual proxy data.”

For the present purposes, putting aside whose version of the matters alluded to
by McIntyre is correct, at the very least Penn State should have questioned both
Mann and McIntyre closely about the matters discussed above. It failed to do so.
Thus, Clive Crook’s criticism is spot on:

> “The Penn State inquiry exonerating Michael Mann — the paleoclimatologist who
> came up with “the hockey stick” — would be difficult to parody. Three of four
> allegations are dismissed out of hand at the outset: the inquiry announces
> that, for “lack of credible evidence”, it will not even investigate them. …
> Moving on, the report then says, in effect, that Mann is a distinguished
> scholar, a successful raiser of research funding, a man admired by his peers —
> so any allegation of academic impropriety must be false.” See here.

One very important issue that was to be determined by the PSU endeavor concerned
the collusion by Mann and others to destroy email correspondence:

> “Did you engage in, or participate in, directly or indirectly, any actions
> with the intent to delete, conceal or otherwise destroy emails, information
> and/or data, related to AR4, as suggested by Phil Jones?” (See )

This issue was described in detail and put in context, by Stephen McIntyre at
Climate Audit here beginning with Jones’ email on May 29, 2008:

> “[Phil] Jones then notoriously asked Mann to delete his emails, asking Mann to
> forward the request to [Gene] Wahl, saying that Briffa and Ammann would do
> likewise:
> 
> ‘Mike,
> Can you delete any emails you may have had with Keith [Briffa] re AR4? Keith
> will do likewise… Can you also email Gene [Wahl] and get him to do the same? I
> don’t have his new email address. We will be getting Caspar [Ammann] to do
> likewise.
> Cheers, Phil’
> 
> Mann replied the same day as follows:
> 
> ‘ Hi Phil,
> … I’ll contact Gene [Wahl] about this ASAP. His new email is: generwahl@xxx
> talk to you later,
> mike’
> 
> That Mann lived up to his promise to Jones to contact Wahl about deleting the
> emails seems certain. In early 2011, from the report of the NOAA OIG, we
> learned that Wahl (by this time, a NOAA employee), told the NOAA IG that “he
> believes that he deleted the referenced emails at the time.” See here.

It is clear that, at the very least, and being charitable to Mann, he indirectly
engaged in “actions with the intent to delete, conceal or otherwise destroy
emails, information and/or data, related to AR4, as suggested by Phil Jones.”
Yet the alleged PSU investigation totally botched this simple, very important
issue.

5. National Science Foundation Closeout Memorandum

On page 90 of its opinion, the Court of Appeals referred to a National Science
Foundation (NSF) report, which did investigate Mann and whose investigators
talked to Stephen McIntyre, but which did not reference his comments or the
questions that were asked. The report was barely over four pages long and was
unsigned and undated. See bottom of page here. The report contained no
indication whatsoever as to who wrote the memo or who performed the tasks that
were identified in the memo. Moreover, neither Penn State nor Michael Mann was
specifically named in the report. In over 30 years of practicing law, I have
never seen such a weird document.

The memo was dense and filled with “bureaucratic speak” which tends to distract
attention from those matters that are pertinent to the opinion of the Court of
Appeals. It is difficult to improve on Steve McIntyre’s summary of the report
from his Amicus Brief (See ), so I will borrow heavily from him. The relevant
portions of his summary were that:

> “The National Science Foundation (“NSF”) spoke to some of Mann’s critics
> (including … [Stephen] Mclntyre), but the report did not name them or discuss
> any of the falsification concerns.
> 
> * Nor was the NSF investigation “broadened” to the extent portrayed by
> the division. Its investigation was limited to misconduct as defined in the
> NSF Research Misconduct Policy, which concerns only “fabrication,
> falsification and plagiarism …  in research funded by NSF.” It stated that
> Mann “did not directly receive NSF research funding as a Principal
> Investigator until late 2001 or 2002.” Because the MBH98 and Figure 2.21
> falsification allegations pre-dated 2001, the NSF had no jurisdiction over
> these allegations.
> 
> * There is no evidence that the NSF “broadened” its investigation to consider
> claims regarding Mann’s unprofessional conduct under Policy AD47 (over which
> it had no jurisdiction).
> 
> * Finally, the NSF (like Penn State) never investigated Mann’s role in getting
> Wahl to delete the most sensitive email correspondence. ” (See p.10 of brief.)

There are three basic points to be made about the NSF memo. First, the memo does
not investigate much of Mann’s work, and so it could not exonerate him from
charges concerning the validity of the whole body of his work. Second, it did
not investigate whether Mann assisted, or encouraged Eugene Wahl to delete
emails, which is an extremely important issue touching on his professionalism
and compliance with the law. Third, the memo is completely unsubstantiated; it
is not clear who wrote the memo or did the underlying work. Without being
familiar with the genesis and the manner in which the memo was written, there is
no way to assess its credibility or the accuracy of its findings.

6. Climategate Emails

On p. 84 of its opinion, the Court referred to 1075 CRU emails and claimed that
investigations of these emails contributed to the exoneration of Mann. (The Muir
Russell Report on p. 26 referred to 1073 emails)

This reliance on investigations of the emails is misplaced for a number of
reasons. First, the emails examined were less than 0.3% of the CRU’s emails.
(See p. 26 of the Muir Russell Report.) From 1998 on, there would have been many
more emails written by Mann at the institutions where he worked that were not
sent to the UEA, and none of these were included in the 1075 emails discussed by
the Court. Second, the Muir Russell Report found that, out of the 1073 emails,
only 140 involved Mann. (Muir Russell Report, p. 26.) Third, the one report that
explained its procedures in detail and did appear to take a substantial look at
the emails, the Muir Russell Report, was only examining the emails to determine
how they reflected on the CRU; there was no attempt to focus specifically on
Mann’s culpability or innocence.

7. Legal Sleights of Hand

Since this post is focused mostly on whether, as a factual matter, Mann was
exonerated by the investigations identified by the Court, it is designed to
mostly avoid legal issues and standards. However, there are several instances of
legal misdirections that are closely tied to the exoneration issue. I would like
to highlight them.

First, the Court stated: “Dr. Mann also submitted extensive documentation from
eight separate inquiries that either found no evidence supporting allegations
that he engaged in fraud or misconduct or concluded that the methodology used to
generate the data that resulted in the hockey stick graph is valid and that the
data were not fabricated or wrongly manipulated.” The phrase beginning with “or
concluded” has the effect of shifting the focus from the actions of Mann to
climate science in general. This shift is improper in this case because it is
the actions of Michael Mann that are at issue in the defamation case, not the
validity or invalidity of “mainstream” climate science. For instance, mainstream
climate science could be valid, but Mann, as an individual, could be misapplying
it.

Second, the Court stated: “We set aside the reports and articles that deal with
the validity of the hockey stick graph representation of global warming and its
underlying scientific methodology. The University of East Anglia, the U.S.
Environmental Protection Agency, and the U.S. Department of Commerce issued
reports that concluded that the CRU emails did not compromise the validity of
the science underlying the hockey stick graph.” (See p. 83). This makes no sense
at all because one of the main criticisms of Mann was that he, in some
circumstances, was complicit in the publication of graphs that concealed the
decline in the tree ring density proxies relative to instrumental temperatures.
As previously noted, p. 13 of the Muir Russell Report stated:

> “On the allegation that the references in a specific e-mail to a ‘trick’ and
> to ‘hide the decline’ in respect of a 1999 WMO report figure show evidence of
> intent to paint a misleading picture, we find that, given its subsequent
> iconic significance (not least the use of a similar figure in the IPCC Third
> Assessment Report), the figure supplied for the WMO Report was misleading.”

Third, on p. 83 of its opinion, the Court stated that the alleged false
statements that formed a legitimate basis for Mann’s defamation suit were that
Dr. Mann engaged in “dishonesty,” “fraud,” and “misconduct.” The undisclosed
concealing of the decline in the IPCC report and the splicing of two different
data sets in the WMO report can certainly be criticized as being “dishonest” or
as evidence of “misconduct.” By putting aside evidence that Mann was involved in
undisclosed manipulation, the Court is unfairly penalizing the defendants for
pointing out, at the very least, potentially objectionable behavior by Mann.

Fourth, on p. 84, the Court stated that four institutions “conducted
investigations and issued reports that concluded that the scientists’
correspondence in the 1,075 CRU emails that were reviewed did not reveal
research or scientific misconduct. Appellants do not counter any of these
reports with other investigations into the CRU emails that reach a contrary
conclusion about Dr. Mann’s integrity.” As this post makes clear, there is no
evidence that any of the four investigations thoroughly examined Mann. Thus, the
Court should not rely on those investigations. Additionally, even if there had
been thorough investigations, they would not have to be rebutted by other
institutional investigations. For instance, if McIntyre’s criticisms, set forth
in Sec. 4 of this post, are true, it does not matter what the reports referenced
by the Court stated.

Conclusion

A true exoneration of someone accused of misconduct would involve transparent,
thorough exchanges between the supporters and opponents of the accused. Then, at
the conclusion of that process, there would be clear, verifiable proof that the
charges were incorrect. That did not occur with respect to Mr. Mann.

The recent mistakes made in the investigation of Larry Nassar, a Michigan State
and USA Gymnastics physician, illustrate the problems in relying on one-sided
and superficial reports. Michigan State began receiving reports of sexual abuse
in 1997, and it was not until 2016 that the reports were finally given credence.
Patrick Fitzgerald, a nationally known federal prosecutor, was hired in 2014 to
investigate the claims of sexual abuse. Later, in 2017, he was asked about his
work, and Fitzgerald stated:

> “his law firm and another were retained by MSU, in part, “to review the
> underlying facts and disclose any evidence that others knowingly assisted or
> concealed” Nassar’s criminal conduct.
> 
> “Had we found such conduct, we would have reported such evidence to law
> enforcement promptly. And much as there is no ‘investigative report,’ there is
> no document that constitutes ‘Fitzgerald findings.’ ”
> http://www.detroitnews.com/story/news/local/michigan/2017/12/08/msu-larry-nassar-investigation/108437686/

In light of the numerous cases of sexual abuse that came to light, it is clear
that Fitzgerald, notwithstanding his, to that point, sterling national
reputation, had done a poor job in his work for Michigan State. In much the same
way, even though there are a number of reports that purport to exonerate Mann, a
reasonably close look at the reports reveals that they are superficial and
couldn’t possibly exonerate Mann from charges of misconduct. Further, some of
the investigations that Mann claimed exonerated him did not even focus on his
work.

JD Ohio

END OF POST

Explanatory Notes

1. I originally proposed this post to Lucia, and she agreed to host it. Steve
McIntyre provided some links and materials to me, so I offered to cross-post at
his site, and he accepted. So, I am posting at both sites.

2. Popehat also criticized the exoneration portion of the Court’s opinion. See
https://www.popehat.com/2017/01/04/dc-appellate-court-hands-michael-mann-a-partial-victory-on-climate-change-libel-case/

3. I actually have the PSU reports, but I can’t find a working link. If someone
has a link to those reports, that would help. In the post, I linked to
McKitrick’s article on the Climategate investigations, which gives a good
summary.

***4. From Steve McIntyre, here is his summary of Mike’s Nature Trick.

Mike’s Nature Trick was totally different from his subsequent talking points, in
which he claimed that his “trick” was simply to show actual data (e.g.
instrumental temperature) and estimates (e.g. a proxy reconstruction) in the
same figure. Such figures, however, have been commonplace since the start of
statistics. The technique was not invented by Mann, nor is it a “trick”.

Mann’s Nature Trick was a sly method of creating an uptick in the smoothed proxy
reconstruction, a topic of ongoing interest to Mann. Smoothing requires data
beyond the end point of the series. Mann spliced his unsmoothed proxy
reconstruction (ending in 1980) with actual temperature data from 1981 to 1995
(MBH98), and later to 1998 (MBH99), prior to smoothing with a 50-year
Butterworth filter in MBH98 (40 years in MBH99), with additional padding from
average instrumental data. All values of the smooth after the end of the proxy
period were then discarded. (For more detail, see here; the topic was originally
diagnosed in 2007 by CA reader UC here and expounded at greater length with
Matlab code here.)

The effect of the trick was to remove an inconvenient downturn at the end of the
smoothed series, a downturn which resulted from using Mann’s smoothing method
without splicing instrumental data. Mann’s “trick” was first noticed at Climate
Audit in 2007, long before the Climategate email. UC and others sardonically
contrasted Mann’s actual technique with his loud proclamations at realclimate
that no climate scientist had ever spliced instrumental and proxy data in a
reconstruction.

The technique described in Jones’ email varied somewhat from the technique in
Mann’s 1998 article. Like Mann, Jones combined the proxy reconstruction and
instrumental data to construct the smoothed series shown in the WMO 1999 report.
But whereas Mann had cut the smoothed reconstruction back to the end of the
proxy period, Jones instead showed the single merged series, a glaring rebuttal
of Mann’s strident claim that:

> No researchers in this field have ever, to our knowledge, “grafted the
> thermometer record onto” any reconstruction. It is somewhat disappointing to
> find this specious claim (which we usually find originating from
> industry-funded climate disinformation websites) appearing in this forum.

5.  I am fairly busy now and may be slow responding to comments.

***As to the reports actually relied upon by the Court, the opinion is
confusing. There is one passing reference to an EPA report, but it is never
discussed in detail. There are detailed discussions of the House of Commons
Report (roughly 85% of which discussed the Muir Russell Report and the Oxburgh
Report). Even more confusing, the Court never specifically discussed the Oxburgh
Report itself. In any event, for my purposes, I will consider the four reports
referenced by the Court that require substantive discussion to be the Muir
Russell Report, the Oxburgh Report, and the Penn State Reports (two different
reports were made to Penn State).

*************

By jddohio | Posted in Inquiries, Uncategorized | Tagged inquiry, libel, muir
russell, nsf, nsf oig, oxburgh, penn state, steyn, wahl | Comments (20)