Roy Spencer, Missing Heat

This is a very interesting post by Dr. Roy Spencer on the failure of the ocean to warm according to models, the IPCC’s misleading treatment of that failure, and a simple but powerful demonstration of where the models likely go wrong. It deserves more attention. The result: a greatly reduced sensitivity of climate to CO2. Every day the evidence mounts that modeled warming is overstated. Climate science and the mainstream media like to lump skeptics together because “we don’t know X” or “we always say Y.” There are strong reasons to be skeptical of global warming alarmism. It is impossible to cover every point with a short list, but some of my favorites include: the multi-model mean running 2-4X the observed temperature trend, the missing hot spot, and the missing ocean heat. Roy’s post is some of the strongest evidence I’ve read that the heat from climate feedback to CO2 just isn’t there.

I’m surprised the center of the internet, WUWT, hasn’t run it yet. Maybe they have by now; I haven’t been over there yet today. Jeff

More Evidence that Global Warming is a False Alarm: A Model Simulation of the last 40 Years of Deep Ocean Warming

June 25th, 2011

NASA’s James Hansen is probably right about this point: the importance of ocean heat storage to a better understanding of how sensitive the climate system is to our greenhouse gas emissions. The more efficient the oceans are at storing excess heat during warming, the slower will be the surface temperature response of the climate system to an imposed energy imbalance.

Unfortunately, the uncertainties over the rate at which vertical mixing takes place in the ocean allows climate modelers to dismiss a lack of recent warming by simply asserting that the deep oceans must somehow be absorbing the extra heat. Think Trenberth’s “missing heat“. (For a discussion of the complex processes involved in ocean mixing see here.)

Well, maybe what is really missing is the IPCC’s willingness to admit the climate system is simply not as sensitive to our greenhouse gas emissions as they claim it is. Maybe the missing heat is missing because it does not really exist.


click here to see the rest of the post.

69 thoughts on “Roy Spencer, Missing Heat”

  1. “I’m surprised the center of the internet WUWT hasn’t run it yet”

    1. Anthony is off on a jolly, and has announced that posting will be slow for a while

    2. Is this post a pre-echo?

  2. I think this all gets back to ignoring the secondary cooling effect of GHGs. By only focusing on the warming side you will always see a higher forcing number than exists in reality. Until the climate scientists acknowledge the entire effect of GHGs we will continue to see them get the wrong answers.

  3. I am continually driven mad by one fact. When performing calculations based on the math of combustion engineering, the radiative absorption of enthalpy by CO2 increases logarithmically for a period, but eventually reaches a maximum, after which there is no further absorption, no matter how much CO2 you add. For reasons beyond my comprehension, the atmosphere as modeled by climate science is different from the atmosphere as modeled by combustion engineering. I’ve yet to see a rational explanation of why this is so. And in combustion engineering, we need to design things that work.

  4. John Eggert,

    Yes, once all light at 14 microns is absorbed, you can’t absorb any more. But the real issues are twofold:

    1) The absorption band for CO2 broadens slightly with rising concentration, meaning that infrared light which would otherwise pass through the atmospheric emissions window (~8 to ~13.5 microns) is slightly restricted by the narrowed window, reducing total radiative heat loss from the surface to space, and

    2) High in the atmosphere (upper troposphere), where there is not enough mass of CO2 above to absorb 100% of the light in the 14 micron band, a rise in CO2 does indeed reduce the loss of energy to space, so the altitude where the atmosphere is sufficiently thin to allow emission at 14 microns rises…. and that means a colder emission temperature, and less heat loss. The “physical height” of the portion of the atmosphere that is effectively opaque at 14 microns increases with rising CO2.

    The simple stuff is pretty much right; doubling CO2, in the absence of any feed-backs, ought to warm the surface by about 1C – 1.2C. It is the net magnitude of feed-backs that is the real issue. The shortfall in measured ocean heat (compared to climate models) is indicative of errors in the climate models, which cause those models to substantially overstate climate sensitivity to radiative forcing by CO2 and other GHG’s.
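    For what it’s worth, the ~1 C no-feedback figure can be sanity-checked from two textbook approximations: the Myhre logarithmic forcing formula and the Planck response at the effective emission temperature. The numbers below are my own round values, nothing from Roy’s post:

```python
import math

# Myhre et al. approximation for CO2 radiative forcing (W/m^2)
def co2_forcing(c, c0=280.0):
    return 5.35 * math.log(c / c0)

# No-feedback (Planck) response at the effective emission
# temperature T_e ~ 255 K: lambda_0 = 4*sigma*T_e^3
sigma = 5.67e-8                       # Stefan-Boltzmann constant
planck = 4 * sigma * 255.0**3         # ~3.76 W/m^2 per K

dF = co2_forcing(2 * 280.0)           # doubling: ~3.7 W/m^2
print(round(dF / planck, 2))          # ~1 K, consistent with 1 - 1.2 C
```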

  5. So…. does that mean that the demonstration that simple lower-order system models imply a lower sensitivity than higher-order solutions does not matter? Is the headline “More Evidence that Global Warming is a False Alarm” justified?

  6. RB,

    I haven’t seen the math yet but like the simple linear models that match global models or PaulK’s examples at Lucia’s, I found the evidence to be quite strongly in favor of model bias and missing energy. I’ve been wrong plenty of times before though.

  7. #5,
    I am not sure if you are asking me, but if so:

    No; the headline is somewhat justified, since the lower than expected heat accumulation implies lower net sensitivity, though if it were me, I would probably stay away from the words “false alarm”. If the true climate sensitivity is in fact 1.2 – 1.7 C per doubling (and I think there are lots of reasons to believe the correct value lies in that range), and atmospheric CO2 rises to 1,000 ppm (which seems a reasonable upper range for total consumption of economically recoverable fossil fuels), then warming might ultimately (say 150 years out) reach 2+C above today, or perhaps 3+C above 1850. It is not clear whether that much additional warming would on balance be very bad, since there would likely be both positive and negative effects. But it is enough of a change to warrant some level of concern, and certainly to justify additional effort to a) verify the true climate sensitivity and b) rigorously evaluate the effects (both positive and negative) of additional warming. Hysterical projections of multi-meter sea level rises by 2100 ought not be considered, since they are not plausible. Multi-meter rises over a thousand years probably should be, since they are at least plausible.
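    The arithmetic behind those 2+C / 3+C figures follows from logarithmic scaling of forcing with concentration. A quick check, using my round figures of 280 ppm for preindustrial and ~390 ppm for present:

```python
import math

# Equilibrium warming for a given per-doubling sensitivity (C),
# assuming forcing scales with the log of CO2 concentration.
def warming(sensitivity, c, c0):
    return sensitivity * math.log2(c / c0)

for s in (1.2, 1.7):                        # the sensitivity range above
    print(s,
          round(warming(s, 1000, 280), 1),  # vs ~1850: ~2.2 to ~3.1 C
          round(warming(s, 1000, 390), 1))  # vs today: ~1.6 to ~2.3 C
```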

  8. Steve #7,
    Here follows my possibly mistaken understanding. Like Roy Spencer, PaulK also showed that a simple first-order solution yielded 1.3C sensitivity. I thought PaulK’s results showed that constraint in only one direction (surface temperatures) yields multiple solutions that also do not preclude IPCC-type higher sensitivity numbers. Also, that simpler lower-order models yield lower sensitivity numbers. Did Roy Spencer demonstrate something contrary to those results from PaulK?

  9. RB #8,

    My understanding of Paul K’s results is that they were consistent with a considerable range of climate sensitivities, including the IPCC numbers. All you have to do is assume a different combination of aerosol forcing and ocean heat uptake and you can credibly match any sensitivity. His model was a two-layer model, with the second layer 500 meters in depth and uniform in temperature. That is not nearly so good a model as what Roy Spencer has used, and he has matched the measured profile of ocean temperature change. That is a much stronger constraint.

  10. I agree with you guys. Paul showed that fitting the historic data can be accomplished with everything from very low order models to high ones. All the sensitivity ranges were possible, but the point was that hindcasting was no proof of future results. Roy Spencer matched the observations at ocean depths, which he could only accomplish through reduced feedback/diffusion assumptions. This is impressive in that if the heat captured by CO2 didn’t create enough warm water to match the incoming plus back-radiated W/m^2, it must have escaped to space, IOW not captured. The energy imbalance predicted by models doesn’t seem to exist at the magnitudes they predict.

    Very strong stuff IMHO.

  11. Jeff #10
    Spencer demonstrates a fit up to 700m. Trenberth was concerned about lack of data at depths of 1500m and greater. This too still seems to be a contentious point.

    Isaac Held on simple models:

    If we compute the forcing due to doubling of CO2 with the same method that we use to compute F(t) above, we get 3.5 W/m2, so the response to doubling using this value of λ would be roughly 1.5 K. However, if we double the CO2 in the CM2.1 model and integrate long enough so that it approaches its new equilibrium, we find that the global mean surface warming is close to 3.4 K. Evidently, the simple one-box model fit to CM2.1 does not work on the time scales required for full equilibration. Heat is taken up by the deep ocean during this transient phase, and the effects of this heat uptake are reflected in the value of λ in the one-box fit. Longer time scales, involving a lot more than 70 meters of ocean, come into play as the heat uptake saturates and the model equilibrates. I will be discussing this issue in the next few posts.
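    Held’s one-box fit is easy to reproduce schematically. This is my own sketch using only the numbers in the quote (3.5 W/m2 forcing, λ chosen so the equilibrium response F/λ is 1.5 K, a ~70 m mixed layer), not his code. It equilibrates within decades, which is exactly why it cannot capture the slow deep-ocean phase that takes CM2.1 to 3.4 K:

```python
# One-box energy balance: C dT/dt = F - lam*T  (schematic sketch only)
rho, cp, depth = 1000.0, 4000.0, 70.0    # ~70 m ocean mixed layer (assumed)
C = rho * cp * depth                     # heat capacity per m^2, J/m^2/K
F = 3.5                                  # W/m^2, 2xCO2 forcing from the quote
lam = F / 1.5                            # so the equilibrium response is 1.5 K

T, dt = 0.0, 86400.0                     # forward Euler, one-day steps
for _ in range(365 * 50):                # integrate 50 years
    T += dt * (F - lam * T) / C
print(round(T, 2))                       # ~1.5 K: already fully equilibrated
```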

  12. I have a little bit of a different take. Rather than evidence that the climate is insensitive per se this result shows that the observations cannot rule it out, and indeed with mainstream forcing assumptions, fairly low sensitivity does seem to be preferred. Unfortunately the forcings are so adjustable that this preference can be rather easily “rectified” by modelers who will just enhance their aerosol cooling effects. Sadly this makes the last few decades a very weak constraint on climate sensitivity, indeed modelers have been very weakly constrained by all potential “tests” of sensitivity, which is why they can plausibly say that anywhere from 2 to 5 degrees for doubling CO2 is “consistent” with the evidence. Some of the more alarmist individuals have even said that the tail of the distribution towards high sensitivities isn’t fat enough! However values in excess of 5 for a doubling are so physically absurd that nobody knowledgeable would ever believe them.

    I am currently working on something about the “constraints” modelers frequently refer to as eliminating low estimates. In my view there are many reasons why their “tests” are not sufficiently analogous to the situation with a doubling of CO2 to constitute “tests” of their sensitivity.

  13. TTCA,

    How is it possible that the climate has a sensitivity of 3C/doubling, yet the air and ocean haven’t warmed enough to show the expected result from current CO2 increases? Where did the stated heat energy go? That is why this is so strong a line of evidence. If the heat is supposed to be in the oceans, and we can’t find it, it must be either somewhere else or nowhere at all.

  14. I haven’t been following this issue closely, but there seems to be the odd news release or two lying around:

    “Previous studies have shown that the upper ocean is warming, but our analysis determines how much additional heat the deep ocean is storing from warming observed all the way to the ocean floor,” said Sarah Purkey, an oceanographer at the University of Washington and lead author of the study.

    This study shows that the deep ocean – below about 3,300 feet – is taking up about 16 percent of what the upper ocean is absorbing. The authors note that there are several possible causes for this deep warming: a shift in Southern Ocean winds, a change in the density of what is called Antarctic Bottom Water, or how quickly that bottom water is formed near the Antarctic, where it sinks to fill the deepest, coldest portions of the ocean around much of the globe.

  15. From the article noted by RB, with the title listed below, I found the comment (also below) that gets to the crux of the claim. The spatial coverage has to be sparse, and if, as the comment notes, the spatial variation is relatively large, one would have large uncertainties in calculating an average global temperature change for the deep ocean. I did not see any calculations concerning these uncertainties in a brief run through the article. I suspect that the intended message was more a qualitative finding of warming at depth at some locations in the ocean(s). They do note geothermal heat as a potential source, but I have not read the article well enough to comment on that point.

    “Warming of Global Abyssal and Deep Southern Ocean Waters between the 1990s and 2000s: Contributions to Global Heat and Sea Level Rise Budgets”

    “To gain more precise estimates of the deep ocean’s contribution to sea level and global energy budgets, and to understand better how the deep and abyssal warming signals spread from the Southern Ocean around the globe, higher spatial and temporal resolution sampling of the deep ocean is required. The basin space-scale and decadal time-scale resolution of the data used here could be aliased by smaller spatial scales and shorter temporal scales. Furthermore, the propagation of the signal can only be conjectured, not confirmed, with the present observing system.

    In summary, we show that the abyssal ocean has warmed significantly from the 1990s to the 2000s (Table 1). This warming does not occur uniformly around the globe but is amplified to the south and fades to the north (Fig. 8). Both Indian and Atlantic Oceans only warm on one side, with statistically insignificant cooling on their other side.”

  16. Ken,
    Thanks for the link. The energy computed by this group from abyssal waters looks to be nowhere near enough to identify the “missing heat” which I suppose could be from deep waters, measurement errors or could indeed be missing. It looks like the article only addresses a portion of the deep waters beyond 700m.

    Here we make quantitative global estimates of recent (1990s to 2000s) deep and abyssal ocean warming, mostly within or originating from the Southern Ocean. We use repeat hydrographic section data to quantify temperature trends in two regions of the world’s oceans: the global abyssal ocean, defined here as >4000 m in all deep basins (excluding the Arctic Ocean and Nordic seas), and the deep Southern Ocean, defined here as the region between 1000 and 4000 m south of the Subantarctic Front (SAF).

    Trenberth and Fasullo identified the energy buildup to be 1 W/m^2. The authors say this:
    The heating reported here is a statistically significant fraction of previously reported upper-ocean heat uptake. The upper 3000 m of the global ocean has been estimated to warm at a rate equivalent to a heat flux of 0.20 W m^-2 applied over the entire surface of the earth between 1955 and 1998, with most of that warming contained in the upper 700 m of the water column (Levitus et al. 2005). From 1993 to 2008 the warming of the upper 700 m of the global ocean has been reported as equivalent to a heat flux of 0.64 (±0.11) W m^-2 applied over the earth’s surface area (Lyman et al. 2010). Here, we showed the heat uptake by AABW contributes about another 0.10 W m^-2 to the global heat budget. Thus, including the global abyssal ocean and deep Southern Ocean in the global heat budget could increase the estimated ocean heat uptake over the last decade or so by roughly 16%. Considering the ocean between 700 m and the upper limits of our control volumes could add more heat (von Schuckmann et al. 2009; Levitus et al. 2005), reducing the percentage of the contribution computed here somewhat.
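    The 16% figure is just the ratio of the quoted fluxes; a trivial check:

```python
# Quoted flux estimates (W/m^2 averaged over the Earth's surface):
upper_700m = 0.64        # Lyman et al. 2010, upper 700 m, 1993-2008
abyssal    = 0.10        # AABW / deep Southern Ocean uptake from the paper
print(round(100 * abyssal / upper_700m))   # ~16 percent extra uptake
```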

  17. Off-topic, from the annals of history on atmospheric water-vapor, land use and UHI :

    Man cannot at his pleasure command the rain and the sunshine, the wind and frost and snow, yet it is certain that climate itself has in many instances been gradually changed and ameliorated or deteriorated by human action. The draining of swamps and the clearing of forests perceptibly effect the evaporation from the earth, and of course the mean quantity of moisture suspended in the air. The same causes modify the electrical condition of the atmosphere and the power of the surface to reflect, absorb and radiate the rays of the sun, and consequently influence the distribution of light and heat, and the force and direction of the winds. Within narrow limits too, domestic fires and artificial structures create and diffuse increased warmth, to an extent that may effect vegetation. The mean temperature of London is a degree or two higher than that of the surrounding country, and Pallas believed, that the climate of even so thinly a peopled country as Russia was sensibly modified by similar causes.

    Republican George Perkins Marsh, 1847.

  18. 13-The strategy of dealing with it I am talking about says that the heat was never there to begin with, because dust and particulates in the air reflected solar radiation out to space, canceling much of the greenhouse warming. The problem is that these effects are already accounted for in the forcings Roy uses…but the aerosol forcing is very uncertain, so one can plausibly claim that the aerosol forcing just isn’t strong enough…although it is just as plausible that the aerosol forcing from GISS is too large, and the sensitivity is even lower. Personally I find arguments from uncertainty without specific justifications for why that uncertainty should push reality in only one direction to be very weak. But in climate the weakest arguments are invariably preferred to stronger ones…

  19. Steve Fitzpatrick,

    “2) High in the atmosphere (upper troposphere), where there is not enough mass of CO2 above to absorb 100% of the light in the 14 micron band, a rise in CO2 does indeed reduce the loss of energy to space, so the altitude where the atmosphere is sufficiently thin to allow emission at 14 microns rises…. and that means a colder emission temperature, and less heat loss. The “physical height” of the portion of the atmosphere that is effectively opaque at 14 microns increases with rising CO2.”

    OK, this is what always confused me about AGW. This is the hot spot. When we get more CO2 we also get more warming (allegedly). The warming raises the tropopause. We are told by the modelers what you are stating: that the warming and increased CO2 are offset by the cooler temperature at the higher average altitude of emission. Yet the tropopause will rise due to a warmer troposphere, meaning that the larger amount of CO2 in the stratosphere will be in a larger volume, and so will the upper troposphere, and the temps will be higher at these altitudes than before the warming. In my fevered imagination it balances out. We start with a warmer temp at the ground, and the lapse rate says we are at a higher altitude for the same emission temperatures. The extra CO2 is balanced by the larger volume and higher temps at higher altitudes. Unless the lapse rate somehow increases?

  20. There is a branch of physics/chemistry named “spectroscopy”. It has a long and illustrious history, because the early observations led to the notion of duality of light and hence to quantum theory and hence to some of Einstein’s deductive work. Many of the great names participated from roughly the 1840s onwards. Here is a short tutorial from lessons taught to me in the 1960s, from memory.

    Spectra can be emission types, or absorption types. Both involve a change in the energy of an atom or molecule. The simplest case in theory, the spectrum of hydrogen, was coarsely investigated by Balmer in 1885 and in more detail by Rydberg, then more by Lyman, then more by Paschen. There are many, many energies at which transitions can take place in hydrogen.

    Molecules are a much more complicated case than atoms. Properties like asymmetry can lead to modes such as rotational, their flexibility to bending and stretching and vibration.

    Let’s consider the gas CO2. There are some fundamental rules. A molecule can be excited only if an incoming photon (or other allowed source of energy) exceeds a threshold. A molecule already excited can be taken to an even higher state or even several successively higher states. These states are defined, so if photons are emitted as a molecule returns to a lower energy state, they produce discrete spectral lines rather than a continuum. Some energy transitions are allowed, some are forbidden under quantum theory. A photon emitted by one molecule can be captured by another if it has the appropriate energy. In the low density air from the tropopause upwards, CO2 can travel large distances between absorption and emission. It does not behave as it would in a cell in a spectrometry laboratory. There is even a semantic problem in assigning it a temperature.

    There is a natural drive for excited molecules to shed energy, often as photons, to gain low energy stable states, the lowest of which is the ground state. The average low state in a mix of molecules is temperature and pressure dependent.

    If you have not studied the complexities within this brief approximate set of sentences, then you should not be making statements about effects like line broadening and saturation. (I’m not picking on those above.) The concepts get quite involved, quite quickly. Jeff, it would be neat to get a more modern spectroscopist to do a post about CO2 absorption and emission. Vent – every time I did a search in the last hour, almost everything was related to GHG and climate. There was a purer time in physics when you would get tables showing the energies (or their reciprocal, the wavelengths) of the measured and allowed transitions. In numbers, not in arm waving. That is what I was looking for to support these words. Modtran sets out to do this for the atmosphere, but there is room for refinement of factors of influence to make it more relevant.

    However, it might be largely academic, because a case can be made, based on ocean temperatures and photon behaviour at high altitudes, that CO2 is either a non-player or a bit player in explaining the climate as we understand it for many hundreds of years before present. The measured global temperatures for the last decade or more have a physical reason to plateau and that reason, when agreed upon by experts, will fascinate me. My bet is that it will not involve CO2, unless trivially.

    I thank Roy Spencer and others for helping shape that conclusion. I do not thank those who adjusted or lost early temperature records.

  21. According to (engineer) Dr Van Andel (slide 26 here), Trenberth has admitted that the radiation window is 66 W/m2, not 40 W/m2 as in his heat balance papers. That surely throws out the concept of missing heat. A number of clever engineers and scientists (including Van Andel here, and Miskolczi; see the excellent paper by Van Andel in E&E vol 21 no 4 2010 p273, “Note on the Miskolczi theory”) are saying that evaporation at the water surfaces (some 75% of the world) and condensation to clouds and precipitation is the major heat transfer mechanism in the atmosphere. The method of condensation to form clouds is still an unknown factor, but it has been established that increased cloud reduces incoming radiation from the sun, and this leads to cooling.
    As an engineer with some experience in heat transfer, I suggest that there has been an over-emphasis on radiation and so-called sensitivity. My calculation, using the equation developed by Prof. Hoyt Hottel from actual measurements in heat exchangers, results in very much lower heat absorption by CO2 than that used by the IPCC.

  22. John Nicol wrote a detailed analysis which can be found (hopefully) at
    Prof. Nicol is Emeritus Prof. at James Cook Uni., Townsville Australia who enjoyed a long career in infra-red spectroscopy. Well worth reading if one has the physics and maths to follow – which sadly I don’t really but I can follow the analysis!

  23. #20,
    The “hot spot” is more related to a predicted increase in tropospheric moisture (which is a strong IR absorber), especially over tropical latitudes, and to a lesser extent over subtropical latitudes. It is an “amplification” issue, and not directly related to basic radiative transfer effects with changes in CO2. The apparent absence of a “hot spot” in a lot of credible data (including, of course tropospheric temperature data from satellites) suggests that the climate models have issues with accuracy of moisture transport/distribution in the troposphere.

    With regard to raising the height of the tropopause, that is an expected effect if there is any surface warming, regardless of cause. We can easily see a big difference in the height of the tropopause in cold regions compared to warm regions. The seasonal shift (higher in summer than in winter) of the tropopause at mid latitudes is also well known.

  24. #21,

    Let’s consider the gas CO2. There are some fundamental rules. A molecule can be excited only if an incoming photon (or other allowed source of energy) exceeds a threshold. A molecule already excited can be taken to an even higher state or even several successively higher states. These states are defined, so if photons are emitted as a molecule returns to a lower energy state, they produce discrete spectral lines rather than a continuum. Some energy transitions are allowed, some are forbidden under quantum theory. A photon emitted by one molecule can be captured by another if it has the appropriate energy. In the low density air from the tropopause upwards, CO2 can travel large distances between absorption and emission. It does not behave as it would in a cell in a spectrometry laboratory. There is even a semantic problem in assigning it a temperature.

    I think you have some misconceptions. CO2 molecules can enter an excited state either by absorbing a photon of the appropriate wavelength (~14 microns) or due to impacts with other molecules in the air (absorption of other wavelengths, outside the absorption band, is “forbidden” as you note). CO2 can go from an excited state to the “ground state” either by loss of a photon or by transfer of energy to another molecule via physical collision, in which case the energy of the excited state is converted to translational energy (AKA sensible heat). If a CO2 molecule absorbs a photon, it will almost always lose energy via collision rather than re-radiation, unless the pressure is extremely low. The probability that a CO2 molecule emits a 14 micron photon during the time it is in an excited state may be low, but since CO2 molecules also can be “promoted” to an excited state via molecular collision, there is always a population of CO2 molecules which are in an excited state, and that population will continuously emit radiation at the characteristic absorption/emission band of 14 microns, with that continuous emission in random directions. Note that at higher sensible temperature the translational velocity of molecules is greater, so the chance of an impact promoting a CO2 molecule to an excited state increases with rising temperature. Which means that higher temperature always yields a greater population of CO2 molecules in an excited state, and a greater rate of emission of 14 micron radiation by CO2…. just as we would expect, since warmer materials emit more heat.

  25. 15-Steve, surely you meant to reference that to Kuhnkat’s comment, not mine? 🙂 I think RB’s comment 18 appeared from moderation and threw it all off.

    18-Very interesting! Goes to show that our underlying knowledge of the world around us has some deep history to it. However, I believe that the date should be 1874, not 1847, for two reasons:

    First, in 1847 he would have been a Whig, not a Republican, as that party did not exist yet.

    Second, 1874 is the date of publication of Marsh’s book “The Earth as Modified by Human Action” and I figure it was fairly easy to type this date incorrectly as 1847.

  26. steve fitzpatrick said June 28, 2011 at 8:38 am, re spectra

    I can’t see where our comments are in conflict. For brevity, I left out many sub-topics, but those I left in are in agreement with your comment. I differ a little about ground state. Ground state in the quantum mechanical model is a non-zero energy state that is the lowest permitted energy state of a system, rather than a traditional classical system that is thought of as simply being at rest with zero kinetic energy. Just because a ground state exists, it does not follow that molecules reside there in nature as a default. But that’s nit picking.

  27. Steve, #25

    Actually, what you described IS the mechanism for the hotspot. While I accept your statement that it is more to do with water vapor, I would also point out the water vapor increase is alleged to be caused by increased CO2!!

    If you are slowing the release of energy due to the cooler average emission altitude, you create a hot spot as convection and radiation continue to move energy up until it gets hot enough to radiate the energy away as fast as it is coming up. Since we don’t appear to see this warming in the upper troposphere, yet we do see changes in tropopause height, it would seem that the average emission temp does not drop enough to cause the backup.

    As the upper troposphere is supposed to heat faster than the surface, the lapse rate would seem to flatten. Is this possible? Again, with this differential it doesn’t seem reasonable that the average emission temperature would drop, as the temp is higher at higher altitude until the tropopause.

    Are there studies that show the lower average emission temp with an increase of CO2 and an increase in tropopause height, as opposed to modeling? While I have a lot of respect for laboratory work and the models, I also understand that a laboratory typically measures effects in a very isolated environment, and the models have already been shown to be deficient.

    Steve #26, I don’t think so? I probably overstated it, making it seem I believe GHGs only emit from collisions, when excitation is much more often from collision, based on the timing of probable events.

    Maybe you can clarify another area. We see spectra looking down at the earth that show large absorption around the CO2 bands. Of course, while CO2 and Water Vapor absorb in this band it also emits in this band. Doesn’t the fact that there is a hole simply show that the energy has been moved to another band through collision? Any suggestions as to what bands this energy is moved to? Any references to studies that cover this?

    Thank you for your time. If you don’t want to bother answering my uneducated or miseducated questions no problem.

  28. Hi Jeff,
    I’ve posted a fairly aggressive comment on Dr Roy’s site (nothing personal). One of the problems with being a true sceptic is that scientific truth can harvest friends and enemies alike.
    To clarify what I thought I was showing on Lucia’s (all under IPCC assumptions), since a number of your posters have raised the issue:-
    1) Any (model) pre-defined Equilibrium Climate Sensitivity (ECS) can be matched to observed average surface temp (GMST) and OHC data by adjustment of forcing data (typically aerosol data)
    2) Even if the forcing data is given, any model can be adjusted to match the GMST and OHC data by adjustment of the feedback algorithms (in the form of the introduction of non-zero coefficients for the non-linear terms).

    My model form for ocean modelling was very simple, but sufficient to illustrate, since it incorporated OHC and surface temperature, that the above conclusions are valid even for more sophisticated ocean models. In fact, it is not too difficult to show that the ocean term, describing the relationship between surface temperature and OHC can be represented as (only) a modification of the coefficients of the temperature terms. A change in vertical ocean temperature profile would not change these conclusions.
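    My conclusion 1 can be shown with even the crudest zero-dimensional bookkeeping: for any assumed sensitivity, an aerosol forcing can be chosen so the model matches the same observed warming. The numbers below are purely illustrative round values (2.6 W/m2 of GHG forcing, 0.8 C of observed warming), not my fitted ones:

```python
# Equilibrium warming in a zero-dimensional model: T = (F_ghg - F_aer) / lam.
# Any observed T can be matched by many (sensitivity, aerosol) pairs.
# All numbers are illustrative round values, not fitted.
F_ghg = 2.6            # W/m^2, rough GHG forcing to date (assumed)
T_obs = 0.8            # C, rough observed warming (assumed)

needed = {}
for ecs in (1.5, 3.0, 4.5):            # assumed C per doubling of CO2
    lam = 3.7 / ecs                    # feedback parameter, W/m^2/K
    needed[ecs] = round(F_ghg - lam * T_obs, 2)  # aerosol forcing required

print(needed)   # higher assumed sensitivity needs a larger aerosol offset
```

The required offsets all fall roughly within the commonly quoted uncertainty range for aerosol forcing, which is why surface temperature alone cannot discriminate between these sensitivities.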

  29. 33-I am curious, did you also create a model match to the vertical profile of changes in ocean temps?

    Also, if you could clarify, are you saying that more than one sensitivity could work even with a given forcing, or that one can get a fit so long as one picks either the sensitivity or the aerosol forcing to work? Because it seems to me that given a particular set of forcings, one should be able to achieve a fit with just one sensitivity, and given a particular sensitivity, one would need a particular set of forcings.

  30. Paul,

    I didn’t see your comment there. Don’t worry about disagreement here; you may get an argument, but that is standard fare at tAV. My interpretation of your model results was that aerosols could be used to correct for literally any climate sensitivity through the measured range. In the case of Roy’s work, we have a big discrepancy between ocean temp observations and predictions. The lack of diffusion of heat through the ocean means to me that the heat isn’t really there. Sure, it could be problems in measurement, but barring that, if the oceans didn’t warm, where did the Joules go?

    I do agree that it is still possible that aerosols could have dampened the response in the past, but we have a model mismatch for heat here. Aerosols really can’t make up the difference for that because the lack of heat accumulation means that sensitivity must be lower than models predicted.

    What am I missing?

    My questions would be: (a) what if the heat went into the unmeasured deep ocean? (b) are we comparing transient warming to a two-box model that uses equilibrium sensitivity values instead of the transient climate response (TCR) – something like what Held warns against here?

  32. Jeff,
    I’ve just checked and my comment is still in moderation on Dr Roy’s site.
    “My interpretation of your model results was that aerosols could be used to correct for literally any climate sensitivity through the measured range.” Correct. And after that, even if you fix the forcings, including aerosols, you can still show that any ECS is consistent with GMST and OHC by adjusting the “structural form” of the feedback – what the IPCC calls “structural uncertainty” in many of the WG discussions.

    “The lack of diffusion of heat through the ocean means to me that the heat isn’t really there. Sure it could be problems in measurement, but barring that, if the oceans didn’t warm, where did the Joules go?” I agree, Jeff, that if a model is not explaining accumulated heat, then it is not working. (And, in all probability, it is overestimating positive feedbacks at that particular point in time.) However, that says only that the model is wrong; it does not automatically say that any alternative model is correct, unless the alternative is the complement of the incorrect model. More importantly, the fact that a model – any model – can match these data tells us nothing about the validity of the underlying ECS of the model, and this applies to Dr Spencer as much as to Gavin.

    So, what I am saying is pretty fundamental. Give it your best shot and you can never deduce ECS from just GMST and OHC alone – unless you are God or Spike Milligan’s daughter. You have to invoke other data.

  33. Paul_K #37,

    Yup. Only other data will help, the most obvious being aerosol effects. But since good aerosol data don’t exist, we need alternatives, like divergence/concordance of a model from/with other measurable parameters of the Earth’s climate (as you and others have noted). It is a messy problem, and not one that will be settled soon.

  34. 36-“are we comparing transient warming to a two-box model that uses equilibrium sensitivity values instead of the transient climate response” Considering the model fit to PCM with the known feedback of that model, I don’t think that’s a problem. You’d only be using a model with the non-transient response if you integrated to equilibrium. AFAICT Roy integrates over 40 years, matching the data. Similarly, the PCM model was integrated over 40 years, as was the fit to that model with its known feedback parameter.

  35. Paul,

    I agree with everything you have written, except that I wonder if recent surface temperatures can result from high sensitivity and still match the ocean data. The heat has to go somewhere, and in your model the DOF allowed the various linear forcings to be included non-linearly. Not that your result is necessarily unreasonable; the high-order feedback may represent something physical, I don’t know. Your demonstration was quite convincing that hindcasting of models on short timeframes does nothing to guarantee accuracy of future projections or ECS. Roy’s work shows that the forcings + feedbacks are likely lower than IPCC estimates because the heat isn’t piling up as expected.

    It seems to me that we have little to argue about. :D.

    I took a few minutes to read Roy’s spreadsheet and found it fairly straightforward. As always these days, time limits my climate fun.

  36. BTW, I also have a comment still in moderation, I suspect Roy doesn’t check that often. Sadly, it seems that I can’t leave comments there with lots of links without being mistaken for spam.

  37. I dont have the technical background that most of you have, but would like to ask a question:-

    Anyone has to accept that the Models are overestimating warming (at least so far). But isn’t the water vapour amplification the most likely suspect? As I understand it (from reading a paper or two by Held and Soden), the Models’ WV amplification occurs very largely in the free troposphere and mainly in the tropics – because there is much stronger upwards convection of WV into the higher troposphere in the tropics. If that is so, doesn’t the absence of that hot/wet spot show that that just isn’t happening to any large extent? If WV amplification was much smaller than the Models say, that would explain their failure.

    I accept that it could alternatively be aerosols, or more low clouds, or more efficient heat transfer to ocean depths, but all of those possibilities have no strong evidence – whereas on my understanding a weak WV amplification does have strong evidence – that missing hot spot.

    Any comments on that?

    (By the way I read Spencer’s ocean heat article which I personally thought the most convincing evidence I have read yet for a low sensitivity).

  38. Bill #42,
    Different models show different amounts of mid-upper tropospheric warming. So while there is a “lack of a hot spot” in the available data compared to some model projections, there is not much (or any) discrepancy with other models. The real issue is how the models explain the large discrepancy between the measured surface warming and the warming that would be expected based on the (fairly accurately) known GHG forcing (currently ~3.05 W/m^2) along with the high climate sensitivity calculated by models. As Roy Spencer points out, one of the claimed reasons for very modest warming is the thermal inertia of the system, which implies a large quantity of heat accumulating in the ocean. The other claimed reason is the influence of aerosols (direct and via a change in cloud properties) on Earth’s albedo. If the measured heat accumulation is substantially lower than that expected based on CGCMs, then that implies the aerosol effects must be higher than claimed by the models. Each model uses its own historical aerosol effect, and each is different… they can’t all be right. Since the magnitude of aerosol effect is essentially unconstrained, a shortfall in ocean heat accumulation can be compensated for by assuming higher aerosol effects; if the ocean heat accumulation is only half of what you thought it was (say 0.35 W/m^2 instead of 0.7 W/m^2), just dial up the assumed aerosols by 0.35 W/m^2 and that “fixes” the problem.

    Fudging the aerosols upward to fix the ocean heat shortfall does however constrain predicted warming somewhat, since man-made aerosol effects ought to be roughly proportional to the rate of fossil fuel burning. If you assume substantially higher aerosol effects, that implies rapidly growing fossil fuel use ought to also rapidly increase aerosol effects. This produces a trajectory of future warming for rising GHG forcing which is below that of the IPCC model average. The alternative explanation, that the climate is just much less sensitive to GHG forcing than the models indicate, seems to me more plausible than extreme aerosol effects.
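    The “dial up the aerosols” arithmetic above is just a budget identity, and can be sketched in a few lines. The warming and ECS values below are assumed round numbers (only the 3.05 W/m^2 GHG forcing is quoted in the comment): the surface energy balance F_ghg + F_aer ≈ lam*dT + Q_ocean means any shortfall in ocean heat uptake maps one-for-one into extra negative aerosol forcing.

```python
# Back-of-envelope global energy budget.  dT and the assumed ECS are
# illustrative; F_ghg is the value quoted in the comment above.
F_ghg = 3.05          # W/m^2, current GHG forcing
dT = 0.8              # K, assumed observed surface warming
lam = 3.7 / 3.0       # W/m^2/K, feedback parameter for an assumed 3 K ECS

required_aerosols = []
for Q_ocean in (0.70, 0.35):  # claimed vs. halved ocean heat uptake, W/m^2
    # Budget: F_ghg + F_aer = lam*dT + Q_ocean  =>  solve for F_aer
    F_aer = lam * dT + Q_ocean - F_ghg
    required_aerosols.append(F_aer)
    print(f"Q_ocean = {Q_ocean:.2f} W/m^2 -> required F_aer = {F_aer:+.2f} W/m^2")
```

    Halving the assumed uptake from 0.70 to 0.35 W/m^2 moves the required aerosol forcing exactly 0.35 W/m^2 further negative, which is the compensating “fix” described above.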

  39. 43-Actually the models are fairly consistent amongst themselves about the ratio of warming aloft to the surface warming. Except for RSS, all the observations show a smaller ratio than the range of models:

    Christy J.R., Herman B., Pielke R., Sr., Klotzbach P., McNider R.T., Hnilo J.J., Spencer R.W., Chase T., Douglass D. What Do Observational Datasets Say about Modeled Tropospheric Temperature Trends since 1979?. Remote Sensing. 2010; 2(9):2148-2169.

  40. timetochooseagain #44,

    There is considerable discrepancy between the average of the models and much/most of the data. But there is also uncertainty in the data, especially the weather balloon data. Mainstream climate science continues to point at the uncertainty and claim that the models and the data are “consistent”, and that Christy et al are mistaken. I suspect the uncertainty is real, but that the models overstate warming due to inaccuracies in moisture transport/distribution. I agree that the most parsimonious interpretation is that the models are not right, but that view is hotly contested.

  41. #4 Steve Fitzpatrick:

    So where is this new 14um EMR in the upper atmosphere coming from, seeing as it has already been absorbed? Don’t say re-radiation. This is accounted for. The EMR that counts is the EMR emitted by the radiating surface. This is what increases the enthalpy of (warms) the intervening gas.

    Also, in combustion engineering, the path lengths can be substantially higher than current atmospheric levels, so those methods already account for the broadening absorption band. As I said, there comes a point where you can add CO2 with no further impact on enthalpy absorption. At least in every other field other than climate.

  42. 45-Part of what Christy et al. set out to do was show that the uncertainty in tropospheric trends can be substantially reduced if one identifies specific errors and their likely causes in the datasets. RSS’s compatibility with the models (it is on the edge of most observations) is shown to be an artifact of a spurious warm bias relative to all other measures. Taking the various biases in some of the data into account, they get best estimates of changes with narrower uncertainty, which pretty much rules out the models being right. Naturally, despite showing that the balloon data agree with the UAH data rather well, and that RSS has a demonstrable warm shift that is even present relative to the surface, there will be those who maintain that the tropospheric trends are too low. They cite uncertainty, but as I have said before (comment 20) this is disingenuous. Uncertainty cuts both ways.

  43. #47,

    The objections to Christy et al (and their earlier paper) generally fall into the very large category of publications that I call “You can’t prove the models are wrong that way either”. The best example was Santer et al’s response to the original paper. But there have been dozens of others, many with Gavin as one of the co-authors. As I said, it seems to me the most parsimonious explanation is that the models are quite wrong. But note where Christy et al ended up being published… not in Nature, Science, or even GRL, and I don’t think that is coincidental. The climate science community does not (so far) seem to accept that the models are way off in a number of important ways. I expect this will ultimately change, but no time soon.


    I really do not understand what you are saying. Of course there is outgoing 14 micron radiation within (and above) the upper atmosphere. The (small) population of excited CO2 molecules in the atmosphere means that there will be a continuous “re-radiation” at 14 microns in all directions at all times, the intensity of which will depend on the temperature where the emission takes place. The intensity of 14 micron radiation originating from the ground, as measured above the atmosphere, is (of course) reduced to essentially zero compared to what it would have been in the absence of CO2, but the total at 14 microns is certainly not zero, since you can always expect CO2 throughout the atmosphere to be radiating at 14 microns. The absorbance of the atmosphere at 14 microns (per unit length of path) falls with altitude, in parallel with pressure. That the ground radiation at 14 microns is 100% absorbed before reaching space does not preclude emission at 14 microns by CO2… everywhere in the atmosphere. Like I said, I don’t understand what you are saying.

  44. Jeff,
    When I get a minute, I will bolt a high-order integral solution onto Roy’s spreadsheet to test what can and can’t be done with recent data. Actually, I’d feel better about this if Roy reads my comment (still in moderation) and tries this for himself. I think it will unstick him from applying linear feedback models, which can only be a good thing.

    “The heat has to go somewhere, and in your model the DOF allowed the various linear forcing s to be included non linearly. Not that your result is necessarily unreasonable, the high order feedback may represent something physical, I don’t know.”

    The high order terms definitely represent something physical. At the very least the S-B response gives a third-order term in DeltaT – often ignored, because for short-term response to small forcings, the contribution of the higher-order S-B terms can be calculated to be small.

    More importantly, the linear model PRESCRIBES an exponential response in radiative flux AND a short equilibration time. There is no escaping this. If you accept that there is physical evidence for even multidecadal response to a forcing (let alone the IPCC’s multi-century responses), then there must be significant high-order contribution.

    At low values of t and DeltaT (after a forcing at t=0), all the models from low to high order must produce the same shape of rate of change of energy (gain) with time, since they are matching the same heat data. If you plot dH/dt against T, for a fixed forcing, the linear model (by definition) gives a straight line which intersects the Temp=0 line at the forcing, F, and the Temperature line at the ECS. All the models (from low to high order) must asymptote to this line at small values of DeltaT. So the linear model can be validly applied to test rate of change of energy over short time periods. The conclusions from this, however, IMO, can never be used to deduce ECS directly.

    Wish I had a smiley face.
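    Paul_K’s straight line above can be written down directly. In this minimal sketch (3.7 W/m^2 is the standard doubled-CO2 forcing; the two ECS values are arbitrary examples), the linear model’s heat uptake rate for fixed forcing F is dH/dt = F - lam*T, which intersects the dH/dt axis at F and the temperature axis at F/lam, i.e. at the ECS:

```python
# dH/dt = F - lam*T for the linear feedback model at fixed forcing F2X.
F2X = 3.7  # W/m^2, forcing for doubled CO2

lines = []
for ecs in (1.3, 3.0):                 # two example equilibrium sensitivities, K
    lam = F2X / ecs                    # feedback parameter, W/m^2/K
    intercept_flux = F2X - lam * 0.0   # dH/dt at T = 0 is just the forcing
    intercept_temp = F2X / lam         # dH/dt = 0 exactly at T = ECS
    lines.append((intercept_flux, intercept_temp))
    print(f"ECS = {ecs} K: dH/dt(T=0) = {intercept_flux:.1f} W/m^2, "
          f"zero heat uptake at T = {intercept_temp:.1f} K")
```

    Both lines share the flux intercept at small DeltaT; they differ only in where they cross zero uptake, which is the part a short record cannot constrain.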

  45. John #46, this was an ancient argument – Ångström put it in 1907 against Arrhenius’s theory, and it held until the 1950s, when better spectroscopy came along. BTW, I don’t think any of the people involved would normally be called climate scientists.

    But you can see the answer in the observed spectrum. There’s a dip at 15 μ, but it isn’t black.

    Re-radiation is the key – locally, 15 μ is also the peak emission frequency. The net effect is that radiation to space happens at 15 μ, but from near TOA, at TOA temperature. A measure of the warming is the change in that effective emission temp. More CO2 drives the effective emission altitude higher – IR intensity drops.

    It’s true that there is potential for saturation there too; when emission is from the tropopause, raising the altitude won’t reduce emission. But I understand we’re not there yet.
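    The “effective emission altitude” argument can be put in numbers with the Planck function. In this sketch the two temperatures (288 K for the surface, 220 K near the tropopause) are conventional round values, not measurements:

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(lam_m, T):
    """Planck spectral radiance B(lambda, T), W m^-3 sr^-1."""
    return (2 * h * c**2 / lam_m**5) / (math.exp(h * c / (lam_m * k * T)) - 1)

lam = 15e-6                     # 15 micron CO2 band
b_surface = planck(lam, 288.0)  # emission at a typical surface temperature
b_toa = planck(lam, 220.0)      # emission near the tropopause
print(f"15 um radiance ratio (tropopause/surface): {b_toa / b_surface:.2f}")
```

    Moving the effective emission level from the surface to near the tropopause cuts the 15 μm radiance to roughly a third – the sense in which raising the emission altitude reduces outgoing IR.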

  46. 48-I don’t know, Remote Sensing may be a specialist journal, but I don’t think it’s that obscure. I also don’t know the history of this paper, certainly none of the authors reported difficulties, that I know of, publishing these particular findings. But I’d believe it if they did!

  47. PaulK,
    What you state seems to substantiate what Isaac Held has written as I linked to in #11. It seems to me that while the GCM output is a transient temperature response, due to the short equilibration time, Dr. Roy’s model corresponds to an equilibrium temperature response. I’m going to go out on a limb here and use Isaac Held’s description of the TCR as a short-term sensitivity value:

    The transient climate response, or TCR, is traditionally defined in terms of a particular calculation with a climate model: starting in equilibrium, increase CO2 at a rate of 1% per year until the concentration has doubled (about 70 years). The amount of warming around the time of doubling is referred to as the TCR. If the CO2 is then held fixed at this value, the climate will continue to warm slowly until it reaches T_EQ. To the extent that this 70-year ramp-up qualifies as being in the intermediate regime, the ratio of TCR to T_EQ would be β/(β + γ) in the two-box model.

    The median of this ratio in the particular ensemble of GCMs referred to in the figure at the top of this post is 0.56. For several models the ratio is less than 0.5.

    Since Dr. Roy found a sensitivity of 1.3 C/doubling and the TCR-to-equilibrium ratio is ~0.5, using Isaac Held’s description here

    No, because the model does not fully equilibrate on the time scale of a century. As already discussed in previous posts, a more useful point of comparison is the transient climate response (TCR), the warming at the time of doubling in a simulation in which CO2 is increased at 1%/year. The model used here has a TCR of about 1.5-1.6K.

    perhaps, Dr. Roy confirmed a sensitivity of 1.3/0.5 = 2.6C/doubling.


  48. RB,
    “perhaps, Dr. Roy confirmed a sensitivity of 1.3/0.5 = 2.6C/doubling”
    I don’t think so. Maybe you should ask him.

  49. 53-you are being impossibly difficult insisting on this misunderstanding. The sensitivity of Roy’s model is its equilibrium sensitivity. The response fit to the data is its transient response. By your flawed logic, in spite of the fact that we know PCM’s sensitivity, Roy has shown that its actual sensitivity is 4.2 degrees for a doubling of CO2, not the known value of 2.1.

    I think you are either just totally misunderstanding or being appallingly disingenuous.

    Let me spell it out: Roy has fit the transient response of the ocean to a model with an equilibrium sensitivity of 1.3 C for a doubling of CO2. He fit the transient response of the model to the transient ocean warming. Your misunderstanding strikes me as trying to shoot this down as evidence against your view of this issue. Paul_K’s comment has nothing to do with the issue of transient versus equilibrium response, which is a red herring anyway.

  50. TTCA,
    I’m not claiming any deep knowledge or brilliance here. My question was based on PaulK’s comment that in a linear model there is an assumption of short equilibration time. Therefore, would it be correct to say that in Dr. Roy’s model fit to the transient response of the ocean, his model implicitly assumes fast equilibration? If so, would the fitted sensitivity value be better compared to the TCR rather than the GCM’s climate sensitivity? Feel free to ignore me if I continue to offend you, but I’m willing to be shown the error of my ways.

  51. it’s actual sensitivity is 4.2 degrees for a doubling of CO2 not the known value of 2.1
    This is a good point.

  52. Sorry for flying off the handle. Let me tell you why I think that this model does not assume a short response per se. From what I can tell, Paul_K’s point is that higher-order models have longer equilibration times, or at least allow for them. But this is, as I understand it, a relative thing: a linear model can have a long response time; I just think that if it were non-linear, that same model would have an even longer response time.

    Okay, so the key, for me, is that the model and the observations were given the same time to equilibrate. So if reality is out of equilibrium, a good model would also be out of equilibrium. So Roy’s model is of a transient response. The problem, as I understand it, with ECS versus TCR is when one compares the equilibrium response to the transient response in the real world. Being “apples to apples” doesn’t mean it is necessarily correct, of course. A good point would be that if the response times are different, the model could be giving a sensitivity somewhat too low. The reason I don’t think the model is doing that is because it worked on PCM with the known sensitivity of that model. Certainly the bias wouldn’t be as great as the comparison of the transient response to equilibrium response directly.
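    The TCR/ECS distinction can be seen concretely in Held’s own 1%/yr ramp experiment. This is a rough two-box sketch with an assumed PCM-like ECS of 2.1 K; the exchange coefficient and heat capacities are guesses chosen only to land near Held’s quoted TCR/T_EQ ratio of ~0.56:

```python
# Two-box model: mixed layer (Ts) coupled to a deep ocean (Td).
# Cs dTs/dt = F - lam*Ts - gam*(Ts - Td);  Cd dTd/dt = gam*(Ts - Td)
F2X, ECS = 3.7, 2.1          # W/m^2 and K; the ECS is PCM-like (assumed)
lam = F2X / ECS              # feedback parameter, W/m^2/K
gam = 1.4                    # mixed-layer/deep exchange, W/m^2/K (assumed)
Cs, Cd = 13.3, 300.0         # heat capacities, W yr m^-2 K^-1 (assumed)

dt = 0.05
Ts = Td = 0.0
for i in range(int(70 / dt)):            # 1%/yr CO2 doubles in ~70 years
    F = F2X * (i * dt / 70.0)            # linear-in-time forcing ramp
    Ts, Td = (Ts + dt * (F - lam * Ts - gam * (Ts - Td)) / Cs,
              Td + dt * gam * (Ts - Td) / Cd)
print(f"TCR ~ {Ts:.2f} K, ECS = {ECS} K, ratio = {Ts / ECS:.2f}")
```

    At year 70 the surface warming sits well below the equilibrium value, but since both Roy’s model and the observations are sampled at the same transient stage, the comparison remains apples to apples.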

  53. 51.Nick Stokes said June 30, 2011 at 8:18 pm “Re-radiation is the key – locally, 15 μ is also the peak emission frequency.”

    Nick, I’m not sure what you mean here in spectroscopy terms. If there is a preferred emission wavelength, then there has to be a set of preferred excitation wavelengths – or even just one – that couple. What are they and what is their climate origin? Are you using a static or dynamic model, in the loose sense that your observed spectrum is due to events days away, or decades away, from the date it was recorded?

  54. TTCA,
    Thanks for your observations. Between Dr. Roy’s model fit to the PCM model output and his subsequent model fit to the observed ocean temperature profile, he changes not just the model feedback parameter but also the diffusivity coefficients. Secondly, I don’t know how the model feedback parameter relates to the equilibrium climate sensitivity. I’m not well-read on this and don’t know what the impact of these two issues might be. I notice that Chris Colose seems to bring up the TCR as well. In any case, I would be interested in any discussions you might have with Chris regarding this issue.

  55. 64-“he changes not just the model feedback parameter but also the diffusivity coefficients.” Fair point, but I don’t think this has the effect of radically altering the result on sensitivity. “I don’t know how the model feedback parameter relates to the equilibrium climate sensitivity.” It’s easy: take the given parameter in watts per square meter per kelvin, set it equal to -3.3(f)+3.3, solve for f, and then the equilibrium sensitivity is the no-feedback sensitivity divided by 1-f.

    As for having a discussion with Colose, I think I’d have better luck talking to Kim Jong-Il.
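    The recipe above, spelled out as code (the ~1.1 K no-feedback sensitivity and the 3.3 W/m^2/K no-feedback parameter are the usual textbook values; the example lam inputs are arbitrary):

```python
T0 = 3.7 / 3.3   # no-feedback sensitivity, ~1.12 K per doubling

def ecs_from_lam(lam):
    """Convert a total feedback parameter lam (W/m^2/K) to equilibrium sensitivity."""
    f = (3.3 - lam) / 3.3        # solve lam = -3.3*f + 3.3 for f
    return T0 / (1.0 - f)        # ECS = no-feedback sensitivity / (1 - f)

for lam in (3.3, 1.85, 1.2):
    f = (3.3 - lam) / 3.3
    print(f"lam = {lam:.2f} W/m^2/K -> f = {f:.2f}, ECS = {ecs_from_lam(lam):.2f} K")
```

    Note the chain collapses to ECS = 3.7/lam, so e.g. lam = 1.85 W/m^2/K gives exactly 2.0 K per doubling.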

  56. TTCA, I think the question is whether the need to change the diffusivity coefficients makes this a case of over-tuning the model, and whether it is indicating some missing physics. For instance, Chris Colose makes the point that purely diffusion-based models are not sufficient. The issue I believe is that you are using one feedback parameter for one set of data to tune the model (i.e., diffusivity coefficients) and then re-solving for the “model net feedback parameter” using another set of data. The assumption I suppose is that you should get a unique answer. But we also have three additional parameters that need re-tuning. The question is whether these two sets of data impose sufficient constraints for the “model net feedback parameter”. Is the model good when the diffusivity coefficients need to be changed again?

  57. Jeff et al,
    I pulled down Dr Spencer’s spreadsheet with a view to testing higher order integration, and discovered that there are two major errors in the spreadsheet, which probably make further conversation on his findings a bit useless, at least until he has had a chance to review and correct the errors, and modify his conclusions accordingly.
    1) Dr Spencer noted (in an update) an error of a factor of about 10 in the heat capacity term, but argued that this was compensated for by a change in the heat diffusion term of the same order. In fact, the argument for compensatory errors is only valid for the calculations below the first layer, where the calculation involves only terms from interlayer heat flow. The argument is not valid for the calculation of the temperature of the first layer. This critically includes the integration (over time) of the total heat flux due to radiative imbalance, expressed in the model in the form F(t) – lambda * DeltaT [F is the forcing, lambda is the feedback parameter and DeltaT is the temperature change]. There is no compensatory mechanism for the error in heat capacity, and this introduces a substantial error in the first-layer temperature calculation.
    2) The heat capacity term in the model for each layer is given by 50*418000/86400. It is not clear where these values come from, but it is easily confirmed that the final value from this expression is too small by a factor of about 10. I calculated it should be 2555 on the back of an envelope. However, the heat flow term out of layer 1 into layer 2 includes a factor of 41,800 for the layer 1 calculation and a factor of 418,000 for the layer 2 calculation of heat flow from layer 1 into layer 2, which causes the model to violate conservation of energy.

    These two errors are sufficient for me to throw the towel in. Pity. I’m going to bed.
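    The factor-of-ten claim in point 2) is easy to check, assuming (as seems intended) a 50 m layer of seawater with a volumetric heat capacity of about 4.18e6 J/m^3/K, converted to per-day units by the 86400 s/day divisor; that assumed heat capacity gives a figure slightly below Paul_K’s envelope value of 2555:

```python
# Heat capacity of a 50 m ocean layer per day of integration time.
rho_cp = 4.18e6                       # J/m^3/K, seawater (assumed intended value)
correct = 50 * rho_cp / 86400         # ~2419, close to the envelope figure of 2555
spreadsheet = 50 * 418000 / 86400     # the expression as it appears: ~242
print(f"correct ~ {correct:.0f}, spreadsheet ~ {spreadsheet:.0f}, "
      f"ratio = {correct / spreadsheet:.0f}")
```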

  58. 60.Nick Stokes said June 30, 2011 at 8:18 pm “Re-radiation is the key – locally, 15 μ is also the peak emission frequency.”

    I’ll put it another way.

    When you compare incoming versus outgoing radiation at top of atmosphere, are you talking contemporaneous radiation? Or does the outgoing have a history of travelling through the earth systems for decades before making it back to the TOA? Is it instantaneous or lagged?

    I’m also coming from this angle as expressed in Wolfram:
    At equilibrium, the radiation emitted must equal the radiation absorbed. The equation holds when the quantities are appropriately averaged over wavelength, but not necessarily at any given wavelength (incident visible light can be reradiated as infrared). Are you assuming that your 15 micron window is wide enough that all significant excitations and then emissions can happen within the transparent bandwidth and so satisfy Kirchhoff?
