Why Yamal Matters Part III
Posted by Jeff Id on November 11, 2009
Guest post by John Pittman
We start with the summary at p. 683 of Chapter 9, Attributing Climate Change.
P. 683, the Summary: "When driven with estimates of external forcing for the last millennium, AOGCMs simulate changes in hemispheric mean temperature that are in broad agreement with proxy reconstructions (given their uncertainties), increasing confidence in the forcing reconstructions, proxy climate reconstructions and models. In addition, the residual variability in the proxy climate reconstructions that is not explained by forcing is broadly consistent with AOGCM-simulated internal variability. Overall, the information on temperature change over the last millennium is broadly consistent with the understanding of climate change in the instrumental era."
http://www.cafepress.com/luciablackboard shows another reason that these reconstruction issues matter. I like Lucia’s graph. It is straightforward and easy to understand. It also makes the next point easy to make. First we are going to state what the skeptics are talking about. In simple terms, if the reconstructions are wrong, or the confidence intervals so wide that the MWP could have been and was warmer, we would expect to see what Lucia’s graph shows. Even though they claim, and it was true at the time, that
AOGCMs simulate changes in hemispheric mean temperature that are in broad agreement with proxy reconstructions (given their uncertainties), increasing confidence in the forcing reconstructions, proxy climate reconstructions and models.
That was then; Lucia’s graph is now. At this point we can say the AOGCMs are not in broad agreement with the hemispheric mean temperatures presently experienced, thus decreasing our confidence in the forcing reconstructions, proxy climate reconstructions and models.
But the AGW proponents are correct in pointing out that the observations are only just barely outside the range, and that too little time has passed to draw conclusions yet.
But wait: the IPCC linked all of these together, which means we now have less confidence in the proxy climate reconstructions, based on the present or near falsification at the 95% level of the AOGCMs, and on the work by Steve McIntyre, Jean S, UC, Jeff Id, etc., which reduces confidence in the forcing reconstructions. If we have less confidence in the AOGCMs, by a measurement independent of the proxy reconstructions, and less confidence in the proxy reconstructions independent of the AOGCMs’ results at backcasting, then we have much less confidence in the forcing reconstructions. Those projections, on which trillions of dollars of spending and the present direction of the IPCC recommendations depend, are now much weakened.
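To make the "just barely out at the 95% level" idea concrete, here is a minimal Python sketch of the kind of consistency check Lucia’s graph illustrates: does an observed trend fall inside a model ensemble’s 95% interval? All the numbers below are hypothetical placeholders, not Lucia’s actual figures, and a normal distribution for the ensemble spread is assumed.

model_mean_trend = 0.2   # C/decade, roughly the AR4 multi-model expectation
model_trend_sd = 0.15    # C/decade, hypothetical ensemble spread (assumption)
observed_trend = -0.1    # C/decade, hypothetical recent observed trend

# Two-sided 95% interval under a normal assumption: mean +/- 1.96 sd
lower = model_mean_trend - 1.96 * model_trend_sd
upper = model_mean_trend + 1.96 * model_trend_sd

consistent = lower <= observed_trend <= upper
print("95%% interval: (%.3f, %.3f) C/decade" % (lower, upper))
print("observed trend consistent with models:", consistent)

With these placeholder numbers the observed trend lands just below the lower bound, which is exactly the "barely out" situation described above.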
But wait, that’s not all. (I feel like I am in a TV commercial.) There is reason to suspect that the forcing reconstructions have problems as well. This was from Spencer on WUWT: "The IPCC has admitted as much on p. 640 of the IPCC AR4 report, at the end of section 8.6, which is entitled 'Climate Sensitivity and Feedbacks':
'A number of diagnostic tests have been proposed…but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections (of warming). Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.'"
But they state later, on p. 683:
"When driven with estimates of external forcing for the last millennium, AOGCMs simulate changes in hemispheric mean temperature that are in broad agreement with proxy reconstructions (given their uncertainties), increasing confidence in the forcing reconstructions, proxy climate reconstructions and models. In addition, the residual variability in the proxy climate reconstructions that is not explained by forcing is broadly consistent with AOGCM-simulated internal variability."
If we don’t have the sensitivity right, or plausible climate change feedbacks, just what do we have on p. 683? We have, first, confidence from backcasting; that is the first sentence. The confidence in the forcing is based on the reconstructions. But if the IPCC had not developed a set of model metrics to test their estimates, how did they justify their assumptions and estimates? That is the second sentence: the residual variability in the proxy climate reconstructions that is not explained by forcing is broadly consistent with AOGCM-simulated internal variability. That, too, is backcasting.
AR4, p. 600, Chapter 8
There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.
Models’ ability to represent these and other important climate features increases our confidence that they represent the essential physical processes important for the simulation of future climate change. (Note that the limitations in climate models’ ability to forecast weather beyond a few days do not limit their ability to predict long-term climate changes, as these are very different types of prediction.)
AR4, p. 601, Chapter 8
A third source of confidence comes from the ability of models to reproduce features of past climates and climate changes. Models have been used to simulate ancient climates, such as the warm mid-Holocene of 6,000 years ago or the last glacial maximum of 21,000 years ago (see Chapter 6). They can reproduce many features (allowing for uncertainties in reconstructing past climates) such as the magnitude and broad-scale pattern of oceanic cooling during the last ice age. Models can also simulate many observed aspects of climate change over the instrumental record. One example is that the global temperature trend over the past century (shown in Figure 1) can be modeled with high skill when both human and natural factors that influence climate are included. Models also reproduce other observed changes, such as the faster increase in nighttime than in daytime temperatures, the larger degree of warming in the Arctic and the small, short-term global cooling (and subsequent recovery) which has followed major volcanic eruptions, such as that of Mt. Pinatubo in 1991 (see FAQ 8.1, Figure 1). Model global temperature projections made over the last two decades have also been in overall agreement with subsequent observations over that period (Chapter 1).
http://www.esrl.noaa.gov/gmd/ccgg/trends/co2_data_mlo.html shows that CO2 is increasing along the lines of unabated fossil fuel use.
AR4, p. 466, Chapter 6
The first (Mann et al., 1999) represents mean annual temperatures, and is based on a range of proxy types, including data extracted from tree rings, ice cores and documentary sources; this reconstruction also incorporates a number of instrumental (temperature and precipitation) records from the 18th century onwards. For 900 years, this series exhibits multi-decadal fluctuations with amplitudes up to 0.3°C superimposed on a negative trend of 0.15°C, followed by an abrupt warming (~0.4°C) matching that observed in the instrumental data during the first half of the 20th century.
The recent cooling does seem to be almost in line with AR4 for historical data. A quick estimate, using the recent average of -0.115 C/decade (the -0.013 and -0.010 C/yr figures from Lucia’s rank exploits) minus the 0.2 C/decade that the recent CO2 emissions should have caused, indicates that the resulting shortfall of -0.315 C/decade is just barely outside the expected range. However, we have some years left before a full decade has elapsed. So the AGW proponents who say that the recent cooling or flattening is consistent with the presentation in AR4 are correct. With respect to Yamal, if there is increased variability, the models are more likely to be within the range. However, the claim that the later part of the 20th century can only be explained by AGW becomes more suspect.
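For anyone who wants to check the quick estimate above, here is the arithmetic spelled out in Python. The numbers are the ones quoted in the text; the sign convention treats the shortfall as observed minus expected.

obs_trends_per_year = [-0.013, -0.010]   # C/yr, the two figures from Lucia's rank exploits
obs_trend = 10 * sum(obs_trends_per_year) / len(obs_trends_per_year)   # -0.115 C/decade

expected_trend = 0.2                     # C/decade expected from recent CO2 emissions
shortfall = obs_trend - expected_trend   # -0.315 C/decade

print("observed:  %+.3f C/decade" % obs_trend)
print("expected:  %+.3f C/decade" % expected_trend)
print("shortfall: %+.3f C/decade" % shortfall)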
Another reason Yamal is important is what happens in the next 2 to 3 years, or perhaps 20 years. At present, the models which underpin the IPCC presentation are just barely wrong, but the elapsed time frame is too short to draw conclusions. However, by cherry picking one can go back to the year 1998 and show that the forecasting suffers from the same error as the backcasting, if a corrected Yamal gives a warmer MWP than indicated in the spaghetti graph. And there is a humorous reason to use 1998 and claim that the AR4 is in real trouble: the claim that 1998 was the warmest year in the warmest decade, etc., is the criterion for selecting the year.
At this point, starting at 1998, using the same Bayesian a priori assumptions, and assuming Yamal’s “hockey stick” is an artifact, the backcasting is bad, the forecasting is bad, and the claim about warming in the latter part of the 20th century is suspect. Therefore, it is very likely the AR4 is wrong and overestimates the warming. But this is just humorous cherry picking.
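The cherry-picking point is easy to demonstrate. The Python sketch below, using made-up annual temperature anomalies (not real observations), shows how strongly a fitted trend depends on the chosen start year: starting at a 1998-style spike flattens or reverses an otherwise positive trend.

import numpy as np

years = np.arange(1995, 2009)
# Made-up annual anomalies in C, with a 1998-style spike; not real data.
anoms = np.array([0.28, 0.18, 0.36, 0.55, 0.30, 0.29, 0.40,
                  0.45, 0.46, 0.44, 0.48, 0.42, 0.41, 0.33])

for start in (1995, 1998, 2001):
    mask = years >= start
    slope = np.polyfit(years[mask], anoms[mask], 1)[0]   # least-squares slope, C/yr
    print("trend from %d: %+.2f C/decade" % (start, 10 * slope))

With these placeholder values the trend from 1995 is clearly positive while the trend from 1998 is roughly flat, which is the whole trick.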
So, to have an honest evaluation, the MWP has to be shown to be as warm as, or warmer than, the CWP; or the temperatures must drop substantially more in a few years without major vulcanism; or the temperatures must stay flat for about 15 years, at a minimum. The recent citing of a bi-modal growth pattern for species of trees used in the proxies is of special interest. One possible conclusion is that the CWP and the Holocene Optimum are more closely related in temperature than the CWP and MWP, supporting the IPCC AR4. Such a temperature-driven bi-modal response could also show that the CWP and MWP are closely related, falsifying the “spaghetti graph,” if further Yamal coring of sub-fossil trees shows the bi-modal pattern around 1000 AD. It would be ironic if a site chosen to represent the CWP should also, after investigation and explanation, show the MWP as warm as or warmer than the CWP.