the Air Vent

Because the world needs another opinion

If the Square Peg Doesn’t Fit – Get a Hammer!

Posted by Jeff Id on February 28, 2014

It seems like I just got done writing a post noting that Real Climate leaves much to be mocked, and lo and behold Gavin Schmidt deals us a whopper.  A fantastic new paper, written as a comment for Nature and called “Reconciling Warming Trends”, purports to explain the lack of observed warming that directly contradicts the bulk of the climate models.   The first thing the media should take note of is that these scientists have finally noticed what us evil skeptics have been telling you for several years – the predicted level of warming didn’t happen!   It is warming, but not enough to be a problem, and that IS a big problem for the multi-billion dollar climate industry.

As recently as February 2013, Real Climate had their heads in the sand on models with this quote:


The conclusion is the same as in each of the past few years; the models are on the low side of some changes, and on the high side of others, but despite short-term ups and downs, global warming continues much as predicted.
In the meantime, more than one paper was being published claiming the opposite.  And recently Roy Spencer made a cute plot for which the only rebuttal I’ve heard is that he chose an inconvenient starting year.   Not that it changes the result much:
So for the media, who don’t read things like ‘papers’ or data: the blue and green dotted lines have lower slopes than the climate models, therefore the models predicted more warming than was observed.   Just like the Koch-funded unfunded skeptics told you.
But this new paper by Gavin A. Schmidt, Drew T. Shindell and Kostas Tsigaridis (Schmidt 14) is a true gem.   The crew looked at several observed factors in climate since their last runs and found different values for the years 1990–2012. They looked at human aerosols, solar irradiance changes, volcanic aerosols and a “very slightly” modified level of greenhouse gas forcing.
The resulting change in model forcing brought the models in line with observation. Almost.  Well, they are still higher than any actual observation, but adjusting moisture feedback (a large and uncertain factor) is not a sanctified IPCC consideration.
Of course they only show the years since 1990, which is hilarious considering that they are addressing a massive failure of the centennial-scale models to predict even a decade into the future.   Note that despite the efforts to “find” an explanation, moisture feedback, the greatest unknown in climate modeling, was not even mentioned.
Still, there is one tiny elephant in the Real Climate corner.   A claim as specious as Michael Mann’s claim of being exonerated of wrongdoing by the fake Muir Russell climategate report, yet very often made by the Real Climate crowd.

Climate Models are Not Tuned to Observation

For the heck of it, I searched Real Climate for phrases like – ‘not tuned’.

From RC Frequently asked questions:  Are climate models just a fit to the trend in the global temperature data?

No. Much of the confusion concerning this point comes from a misunderstanding stemming from the point above. Model development actually does not use the trend data in tuning (see below).

Gavin comment response: [Response: If you read our papers (and my comments) we are completely up front about what we tune to – the climatology (average annual values), the seasonal cycle, the diurnal cycle and the energy in patterns like the standing wave fields etc. We do not tune to the trends or the sensitivity. – gavin]

Gavin comment response: [Response: I’ve said this before, and I’ll say it again, models are not tuned to match long-term time-series data of any sort. – gavin]

Gavin comment response: [Response: The AR4 runs were done in 2004, using observed concentration data that went up to 2000 (or 2002/3 for a couple of groups). None of them were tuned to the HadCRUT3 temperature data and no model simulations were rejected because they didn’t fit. – gavin]

Comment and Gavin response:

It seems clear that each model is tuned to match past temperature trends through individual adjustments to external forcings, feedbacks and internal variability. Then the results from these tuned model are re-presented (via Figure 2 above) as giving strong evidence that nearly all observed warming is anthropogenic as predicted. How could it be anything else ?

[Response: You premise is not true, and so your conclusions do not follow. Despite endless repetition of these claims, models are *not* tuned on the trends over the 20th Century. They just aren’t. And in this calculation it wouldn’t even be relevant in any case, because the fingerprinting is done without reference to the scale of the response – just the pattern. That the scaling is close to 1 for the models is actually an independent validation of the model sensitivity. – gavin]

What is clear to most of us “skeptics”, and should be very clear to any semi-technical type, is that in modeling, with hundreds of tweakable parameters, if the output doesn’t match the observations, you go back and tweak the input until it does.  Gavin’s insistence that models aren’t tuned is simply his own bias forgetting those hundreds of times when he put CO2 forcing in upside down, or with a ridiculous weighting, by accident or by test, and the result didn’t look at all like he expected, so he adjusted things.   He and many others rightfully find it easy to justify the adjustments post hoc – e.g. the paper they just published.  It’s not wrong to adjust the model; they should match the data.  But they universally, definitely and regularly are adjusted until the output matches some observation.
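For the semi-technical reader, the tune-until-it-matches loop is easy to sketch. The snippet below is a toy illustration only, with made-up numbers and a single knob (real GCMs have hundreds of parameters, and this is nobody’s actual code): a fake model’s aerosol scaling is adjusted by bisection until its output trend matches an assumed observed trend.

```python
# Toy illustration only: a fake "model" whose output trend depends on one
# tunable parameter. All numbers here are invented for the example.

def toy_model_trend(aerosol_scale):
    """Warming trend (degC/decade) of a made-up forced response."""
    ghg_trend = 0.30                        # assumed GHG-driven warming
    aerosol_offset = -0.25 * aerosol_scale  # cooling grows with the knob
    return ghg_trend + aerosol_offset

observed_trend = 0.10  # assumed observed trend, degC/decade

# "Tune" the knob by bisection until the model matches the observation
lo, hi = 0.0, 2.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if toy_model_trend(mid) > observed_trend:
        lo = mid   # still too much warming: dial up the aerosol cooling
    else:
        hi = mid
tuned = 0.5 * (lo + hi)
print(round(toy_model_trend(tuned), 3))  # -> 0.1
```

The point of the sketch is only that with even one free knob, agreement with the target is guaranteed by construction; it says nothing about whether the tuned value is physically right.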
In this case, the models were so far out of whack that they quietly admitted the skeptics were right, and adjusted only their favorite inputs.  Other inputs were quite thoroughly left out.  What is more, most of the inputs had little effect, but by ‘re-analysis’ they made massive corrections to volcanic forcings, only in the recent time-window, to correct recent trends.
Oddly enough, I think this sentence from their paper’s conclusion represents my own thoughts best:
Nevertheless, attributing climate trends over relatively short periods, such as 10 to 15 years, will always be problematic, and it is inherently unsatisfying to find model–data agreement only with the benefit of hindsight.
For my own conclusion, I am highly skeptical that they got any model-data agreement if the process is hindcast.   I’m also completely unimpressed with the kind of numeric mashing used to claim that models are still somehow ‘on the right track’, but this next sentence in their conclusion is completely unjustified/unsupported/unimagined by any aspect of this paper:
We see no indication, however, that transient climate response is systematically overestimated in the CMIP5 climate models as has been speculated8, or that decadal variability across the ensemble of models is systematically underestimated, although at least some individual models probably fall short in this respect.
There is no analysis in the article of expected short-term variance which could possibly explain the models’ failure.   It simply doesn’t exist.  This primary aspect of Gavin’s conclusion is much more like a prayer to Gaia than an article of science.
As is often the case, the Real Climate train-wreck provided us some solid entertainment.   I wonder how many more decades will pass before they figure out that the modeled climate feedback sensitivity looks a little high?

40 Responses to “If the Square Peg Doesn’t Fit – Get a Hammer!”

  1. omanuel said

    Climategate exposed the tip of an iceberg of international deceit that grew out of sight after WWII about:

    1. Japan’s atomic bomb facility that USSR troops captured off the east coast of Korea
    2. Neutron-repulsion as the energy source in cores of heavy atoms, stars and galaxies
    3. “The Sun’s origin, composition and source of energy”

    Click to access lpsc.prn.pdf

  2. Andrew said

Gavin’s inline comments about fitting models to data are flat out lies, putting it mildly.

    That being said, there is something very much amiss here.
    First of all, what justification is there for decreasing the magnitude of Pinatubo? I can see how assuming constant volcanic forcing post 2000 would need to be changed, but why before then?

    Second, it’s just a wrong procedure to use “observed ENSO” to cheat models into matching the wiggles, and hide behind the range of model internal variability. It’s permissible, albeit still wrong, to do one or the other. But definitely not both.

Finally, regardless of the reason, we now know definitively that models projected too high. The outlook for the future is not as grim as these people have been claiming.

The misleading thing about Roy’s graph is that in reality, using HADCRUT4, 1998 was warmer than 2013 by 0.045 °C.

Roy claims to be using running 5-year means, and they are overly kind to the models.

    I prefer this perspective on things:

    • Andrew said

      Using running means on both the observations and the models is meant to illustrate that it’s not mere interannual “noise”, but long term trends, that differ between the series.

  4. When the planet freezes from record high temperatures, they will still be complaining about global warming and proving it with adjusted data. The new norm is that “snow in June is normal, but you will not see it soon”.

  5. Brian H said

    More like using a chisel on the data than a hammer!

  6. Kenneth Fritsch said

Jeff, I think we can take the scrambling by climate modelers to explain the 15-year warming pause as a sign of concern on their part, and probably a substantial loss of confidence in their models. Of course, Gavin, as an advocate and climate scientist, is not going to admit to any problems. The telling part of what he did was an attempt to explain away the pause with “new” inputs into the climate models. That in itself, even without an admission on the part of the modelers and the modelers’ defenders, is an acknowledgement of a weakness in the models. It appears that some who make these model adjustments want to show that the sources are unprecedented and even an adjunct of AGW, and that they are therefore not obliged to answer the question: if the models see and account for something missed in the last 15 years, what does that portend for the effects before that time?

I get a particular kick out of those who attempt to explain the pause with deterministic causes and then, like Gavin evidently, note that stochastically, due to “weather” noise, a pause of this length, although of low probability, is a possible explanation. They should make up their minds which it is, because it cannot be both. If it is weather noise, then that proposition adds to the uncertainty of even determining how well the models can be validated by the instrumental record.

I have made some calculations of the trends in the observed temperatures and the mean temperatures of CMIP5 models over the period 1964-2013, and can show that the model trend is statistically significantly greater than the trend from the observed data for that period. Many complain about a short period for making this comparison and then proceed never to use the extra DOFs that a comparison over a longer period provides.
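Kenneth’s slope comparison can be sketched as follows. The data here are synthetic stand-ins, not the actual CMIP5 or HadCRUT series, and the simple z-style test below ignores autocorrelation, which would widen the error bars for real temperature data:

```python
import numpy as np

def ols_slope(t, y):
    """Return the OLS slope and its standard error for y regressed on t."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    n = len(t)
    tm, ym = t.mean(), y.mean()
    sxx = ((t - tm) ** 2).sum()
    slope = ((t - tm) * (y - ym)).sum() / sxx
    resid = y - (ym + slope * (t - tm))
    se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    return slope, se

# Synthetic stand-ins for 1964-2013 (NOT real data): "observations" warm
# at 0.012 degC/yr, the "model mean" at 0.020 degC/yr, both with noise.
rng = np.random.default_rng(0)
t = np.arange(1964, 2014)
obs   = 0.012 * (t - 1964) + rng.normal(0, 0.08, t.size)
model = 0.020 * (t - 1964) + rng.normal(0, 0.05, t.size)

b_obs, se_obs = ols_slope(t, obs)
b_mod, se_mod = ols_slope(t, model)

# Two-sample z-style test on the slope difference (ignores autocorrelation)
z = (b_mod - b_obs) / np.hypot(se_mod, se_obs)
print(round(b_obs, 3), round(b_mod, 3), round(z, 1))
```

With 50 years of data the extra degrees of freedom make even a modest slope difference detectable, which is exactly the point about longer comparison periods.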

  7. John Norris said

    If your model sucks, and you fine tune it to observations you usually make things worse. If your model is fundamentally good you have a much better chance of fine tuning and improving it. I think this is going to be fun.

  8. Steve McIntyre said

    Jeff, nice to see your recent posts. Have you been able to locate Schmidt’s revised forcing data?

    • Jeff Id said

      I have not found it. You are right that it is the main purpose of the paper and it seems likely that whatever was done to improve the data might be interesting.

  9. Andrew said

    Your pricey stock broker promises a ten percent return in ten years. He says “look, I have a model whose strategies, if they were followed by an investor in the early 90s, would have seen a 7 percent return. This same model predicts 10% returns in the next decade.”

So you pay him to invest your money. And then in ten years, your return is zero. He says, “Look, I underestimated emerging markets and blah blah blah buzz words. Factoring these in, my model for returns is still within the margin for error. Stick with me, and you’ll see 20% returns over the next ten years, I guarantee it!”

Do you stick with him, or fire him? And does why he was wrong factor into your decision? Or are such details irrelevant?

  10. j ferguson said

    Are all noodles the same in the dark?

    There is a lot of spaghetti up there. Do they all have similar TOLT’s? Time Of Last Tweak? I ask because I suspect that untweaked model performance would be even worse than these plots indicate.

On the other hand, do they offer an opportunity for data-mining? Suppose you chopped them up into 5-year noodles and then did recursions (right form of analysis?) against the record. It looks as though a model which goes wild in the late ’90s might track accurately in the following decade. One might ask how those models get it right for short periods, assuming it wasn’t done manually (no smart remarks here), and miss wildly in other periods.

    It could be bad math, or an effect which comes and goes, or cyclical effects with very long periods, or ??

    It does look a bit strange that some of the noodles above do show the leveling off, but after earlier excessive increase.

  11. […] as February 2013, Real Climate had their heads in the sand on models with this quote: – Click here to read the full article […]

  12. D o u g   C o t t o n said

    Why are climate models wrong?
[snip – probably because they estimate the forcing feedback too high]

  13. steveta_uk said

    And recently Roy Spencer made a cute plot for which the only rebuttal I’ve heard is that he chose an inconvenient starting year.

I asked one of those who made this objection how you could claim a “cherry-pick” for the start date, when he plots all data available, since a 5-year running mean with satellite data starting in December 1978 necessarily means that the chart starts in 1983.

    I got no answer ;(

    • timetochooseagain said

Nick’s objection was just nonsense, a racehorse out of the gate who didn’t even bother to check what the data in question was. Just sloppy, shameful really.
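For what it’s worth, the start-date arithmetic checks out. Assuming a trailing 60-month mean over a monthly record that begins December 1978, the first complete window ends in late 1983:

```python
# Monthly satellite temperature record begins December 1978.
start_year, start_month = 1978, 12
window = 60  # trailing 5-year (60-month) running mean

# The first month with a complete trailing window is 59 months after start.
m0 = start_year * 12 + (start_month - 1)  # months counted from year 0
m_first = m0 + window - 1
year, month = divmod(m_first, 12)
print(year, month + 1)  # -> 1983 11
```

(A centered 5-year mean would instead begin in mid-1981; the 1983 start is consistent with a trailing mean.)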

  14. D o u g   C o t t o n said

    The “hammer” drives the last nail in the GH coffin …

    Loschmidt was the brilliant 19th century physicist who was the first in the world to successfully estimate the size of air molecules – within a factor of 2 or so anyway. We can assume Loschmidt thought about what those molecules did, and, with the knowledge of the fact that gas molecules were far smaller than the space between them, the world saw the beginning of Kinetic Theory being applied to “ideal” gases with documented assumptions that I encourage you all to read, because Kinetic Theory was successfully used by Einstein and others, and from it we can derive the well known ideal gas laws. We can also derive (in just two lines) the magnitude of the so-called dry adiabatic lapse rate without using those gas laws or any pressure data.

    It’s not hard to visualise what Loschmidt did, namely molecules moving around at random and colliding with others rather like billiard balls. When they collide they share their kinetic energy, and as a result, we see diffusion of kinetic energy which results in a tendency towards equal temperatures in a horizontal plane. We have all observed such diffusion in our homes when warmth from a heater spreads across the room.

    But, when those molecules move in free frictionless flight between collisions the assumptions of kinetic theory include the “classical treatment” of their dynamics, noting that “because they have mass the gas molecules will be affected by gravity.” And so Newtonian mechanics tell us that the sum of kinetic energy and gravitational potential energy remains constant.

    But, as a gas spontaneously approaches thermodynamic equilibrium it is approaching a state in which there are no unbalanced energy potentials. That state is isentropic, having (PE+KE)=constant at all heights, and this means that KE varies and, as Kinetic Theory tells us, temperature also varies in proportion to the mean kinetic energy of the molecules.

It does not matter that the final state is never completely materialised, and so entropy will still be increasing. We are considering what happens as we approach a limit, just as in calculus. Entropy will keep increasing until that limit is achieved, but it never is because, with a new day dawning, more solar energy is added, causing a significant disturbance to the process and moving it further away from equilibrium. Nevertheless, by the following night, if there are calm conditions, the state of thermodynamic equilibrium will again be approached.

    Over the life of the planet the temperature gradient has obviously evolved on all planets with significant atmospheres, and it also occurs in sub-surface regions such as Earth’s outer crust and inside the Moon.

    The empirical evidence is that Loschmidt was right and that Maxwell erred on just this particular issue wherein molecular studies were perhaps not his specialty. The huge significance of this is that there is no need for any greenhouse radiative forcing to explain planetary atmospheric and surface temperatures. These cannot be explained at all by radiation calculations – only by the gravity gradient. The trillion dollar question is thus, was Loschmidt right?

    • Jeff Id said

      You made it all the way to the last paragraph before crashing. Getting better Doug.

      • D o u g   C o t t o n said

        Jeff if you think you can call on any physicist to debate me in Roy Spencer’s blog, rather than just making assertive statements like this of yours, then they could start by reading my many existing comments above this last one just posted. At least Roy Spencer never snips a word of what I write and we’d have a level playing field there.

        • Jeff Id said

          We tried to debate Doug — for months. You lost every single point, you even changed your tune, yet you still stomp on as though it didn’t happen. If you actually answered specific questions about matter-energy interactions that support your theory rather than ramble on about bulk conclusions as you do, we might be able to talk.

          But you have made it quite clear both here and at the numerous other heavily polluted blogs, that you will not answer specific questions. Unfortunately, you have also made it quite clear that you do not understand standard thermodynamics well enough to explain it, so you cannot clearly describe where your ever-morphing (and flatly wrong) theory of “CO2 can’t warm planets” comes from.

          Debate with you has become pointless and you take the blame for it being pointless. The fact that I recognize the pointlessness is not my fault.

          Now if you wanted to say……discuss how photons interact with matter, I might make a thread specifically for you but it would be under the rules that you don’t make conclusions about bulk properties. Be warned, you might get tied in knots again.

        • steveta_uk said

          At least Roy Spencer never snips a word of what I write and we’d have a level playing field there.

          Roy appears to be a very busy man – he very rarely comments at all, except sometimes on the first few replies to one of his posts.

          So I suspect that the reason Roy doesn’t snip you is that he doesn’t even read most comments on his blog.

  15. D o u g   C o t t o n said

You are wrong in assuming Loschmidt’s gravitationally induced thermal gradient does not evolve spontaneously in a gravitational field. It is the isentropic state of maximum entropy with no further unbalanced energy potentials. You cannot explain why the Venus surface temperature rises by 5 degrees spread over the course of its 4-month-long day with any radiative forcing conjecture or greenhouse philosophy. The Venus surface receives barely 10% of the direct Solar radiation that Earth’s surface receives. It would need over 16200 W/m^2 if radiation were heating the surface. Then, during sunlit hours it would need an extra 450 W/m^2 to raise the temperature from about 732K to 737K. On Earth, if isothermal conditions were supposedly existing without water vapor and other greenhouse gases, then the sensitivity to water vapor would be about 10 degrees per 1% atmospheric content. But there is no evidence that a region with 1% above it is 30 degrees colder than another region at similar altitude and latitude with 4% above it. The surface layer of Earth’s oceans may be considered only 1 cm thick, or even if 10 cm thick it is still very transparent to insolation. But a black or grey body does not transmit radiation, and the surface layer absorbs less than 1% of that incident solar radiation. So the S-B calculations are totally incorrect and planetary surface temperatures cannot be calculated using such.

    This is where the error crept in in 1985 …

    “Coombes and Laue concluded that answer (1) is the correct one and answer (2) is wrong. They reached this conclusion after finding that statement (2a) is wrong, i.e., the average kinetic energy of all molecules does not decrease with the height even though the kinetic energy of each individual molecule does decrease with height.

    These authors give at first a qualitative explanation of this fact by noting that since both the kinetic energy of the molecules and the number density of molecules decrease with height, the average molecular kinetic energy does not necessarily decrease with height.”

    This is absurd. They had the mean kinetic energy decreasing in each molecule, but then they divided again by the number. Try calculating a mean by dividing twice by the number of elements. A glaring error. The Loschmidt effect has NOT been debunked by this nonsense.

    Velasco, S., Román, F.L., White, J.A. (1996). On a paradox concerning the temperature distribution of an ideal gas in a gravitational field, Eur. J. Phys., 17: 43–44.

  16. D o u g   C o t t o n said

    It’s so obvious. Start with a vertical cylinder with three equal sections and removable partitions. Create a (near) vacuum in the top and bottom compartments, and fill the middle compartment with non-radiating argon, allowing it to settle into thermodynamic equilibrium. Now remove the partitions and some molecules will enter the top compartment, losing KE as they rise, whilst some will enter the bottom compartment gaining KE.

  17. hmmm said

It is incredibly important what they don’t show. 1) They only show one inbred, circular-logic set of assumptions to show they could be right based solely on this short time period. They don’t mention the other possibilities, let alone vet why what they chose is likely (there are millions of ways to curve-fit with this many variables involved). 2) They don’t show the hindcast, which is a preliminary (though weak) way to check models vs. reality. 3) They don’t make a prediction, which is ESPECIALLY ODD SEEING AS THE POINT OF THIS PAPER WAS TO RECONCILE PREDICTIONS WITH REALITY. They say that they’ve fixed it, and they admit “…it is inherently unsatisfying to find model–data agreement only with the benefit of hindsight”, but as far as I can see, since I can’t find a free version of this paper, they don’t offer any projections even now, after having claimed to have fixed the problem!
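The curve-fitting point above is easy to illustrate with a toy example (synthetic data, nothing from the paper itself): a model with as many free parameters as data points matches a short record exactly and still extrapolates nonsense.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(6.0)                          # a short "observation" window
y = 0.02 * t + rng.normal(0, 0.1, t.size)   # weak trend plus noise

# With as many free parameters as data points, the fit is exact...
coeffs = np.polyfit(t, y, deg=len(t) - 1)
in_sample = np.abs(np.polyval(coeffs, t) - y).max()

# ...but it says nothing about the future: extrapolate to t = 10
truth_10 = 0.02 * 10                        # what the underlying trend gives
proj_10 = np.polyval(coeffs, 10.0)
print(in_sample, abs(proj_10 - truth_10))
```

The in-sample error is essentially zero while the extrapolation error is huge, which is why agreement found only in hindsight, with many adjustable inputs, carries so little weight.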

  18. D o u g   C o t t o n said

    We don’t need the excuse of assumed carbon dioxide pollution to justify spending some of the humanitarian aid money we give the third world on “cheaper” energy production. Cheaper than coal?

I have shown with valid physics, and an empirical study being published in April, that all the carbon dioxide in the atmosphere actually cools by perhaps less than 0.1 degree and certainly has no warming effect whatsoever.

    If you wish to debate me, please first read all my comments from this one on that thread ..

  19. MikeN said

    Jeff, check out

    In the middle, Ray-Pierre casually mentions
    “The graph is from the NRC report, and is based on simulations with the U. of Victoria climate/carbon model tuned to yield the mid-range IPCC climate sensitivity.”

    However, models aren’t tuned to yield a desired result.

    So RealClimate was putting up posts saying ‘Nothing to see here’, while they were working on a paper that said the opposite?

We shouldn’t be surprised. I can’t find the post, but there was a guest post at RC years ago that suggested global warming would pause for 20-30 years and then come back stronger than ever. At the time, I guessed they were hedging their bets, and that’s when I really started taking the global cooling talk seriously.

    • timetochooseagain said

      No see, they are tuning models to models.

      I mean, Gavin is completely lying when he says they don’t fit to the historical series. He’s just lying.

      But in this case they are tuning a model, to an average of models.

  20. D o u g     C o t t o n said

You may wish to include Teofilo C. Echeverria (author of the paper I linked on Roy Spencer’s blog) in your list of “cranks” because it appears that he and I were the first in the world (back in late 2012) to work out what’s really happening in planetary atmospheres in regard to the process of “heat creep” which explains …

    1. Why Earth’s atmospheric and surface temperatures have nothing to do with carbon dioxide or radiative greenhouse hoaxes, and why water vapour cools.

    2. Why the thermal gradient in Earth’s crust is over 25K/Km, but this reduces to about 1K/Km in the mantle.

    3. Why the core of the Moon is hotter than the surface ever is.

    4. How the required energy gets into the surface of Venus, the troposphere and below in Uranus, and into all planets and satellite moons throughout the universe.

    5. Why planets in our Solar System are not still cooling off, but instead are being kept warm by the Sun.

  21. D o u g   C o t t o n    said

    The truth of the matter is that there is no valid physics which confirms the greenhouse effect or any warming sensitivity to carbon dioxide.

    [snip – Doug, go away with this nonsense.]

    • Anonymous said

      No – you produce the physics that you think justifies promulgating what you do on this blog. I will be able to debunk what you say with valid physics, and then you should do the right thing based on the truth.

  22. John W. Garrett said

    A very useful post, Jeff— and the title is inspired! Thanks.

  23. Jim Z said


    You said “…The first thing the media should take note of is that these scientists have finally noticed what us evil skeptics have been telling you for several years…”

    Instead of saying ‘evil skeptics’ and ‘skeptics’ (elsewhere), you should start saying ‘normal people’; it is the truth, and it sounds more harmonious…

  24. […] On other matters, we also know with certainty that climate models run too hot when compared to these adjusted observations.  That said, some of the deeply ensconced climate alarmist types in the mainstream of the climate field have still failed to admit what is painfully obvious at this point, while other main stream types have moved off message to make corrections to the models.  Basically my own really obvious “certainty” is still being argued with in ridiculous fashion in some die-hard corners of the climate science field. […]

  25. […] Since models have obviously failed, someone should phone Gavin Schmidt. […]
