This appears to have been a stock item pulled from the shelves of the CRRT day-one outbreak-suppression library. I expect it was mostly boilerplate, dusted off and backfilled with banalities appropriate to this usage. One data point does not a trend make, however, so I’ll wait for two more such miracle births from the editorial express before I make fast my suspicions.
The paper is so important that he put together a whole slide show and put it up on YouTube the day it was published. Talk about fast track.
Actually, it strikes me as a kind of PR move.
I’ll be honest: so far I don’t get what he seems to be arguing. Lead-lag analysis only shows a slight difference from models when tweaked in a frankly suspicious way, and since the models show this same behavior, it doesn’t matter?
Um, Roy has been arguing for a long time that models also show biased sensitivity estimates if you just blindly regress the flux anomalies on temperature, especially at zero lag.
His point, apparently, is that this doesn’t mean the clouds are causing temperature variations. And it apparently doesn’t, because we all know clouds don’t vary in climate models except as feedback, right? (Actually, on the short term they do appear to vary independently, I think.) So the fact that models show this pattern just means that it doesn’t mean anything… except that if you don’t account for it in assessing model sensitivities it biases the results, which, as far as I can tell, Dessler hasn’t shown to be wrong.
My understanding is that Dessler’s argument is that the variance due to the energy content of the ocean far outweighs cloud forcing. At this point, I’m trying to understand the data itself so that I can post intelligently on the subject. Isn’t most ocean surface temperature variance caused by warm currents piling and spreading warm water against continents? It is pretty clear that Dessler has assigned it a full ‘warming’ energy value, as though ENSO were actually a volume warming and cooling, rather than warm water piling up and allowing cool water to be exposed, or the inverse.
5-If that is what he is arguing, then his problem is that he has failed to show that cloud forcing doesn’t exist, just that it appears to be small compared to ocean heat flux. I’m not sure about that. Anyway, you make an interesting observation about the way he is describing ENSO variations. It appears to be in contrast to the usual idea that ENSO is a redistribution of heat rather than involving net gain and loss. If so (which may actually be reasonable, I don’t know), I am not sure how it relates to Roy’s arguments.
Other people probably could comment more thoroughly on ENSO variations than me.
Our government’s response team was a little less rapid in releasing isotope data from the 1995 Galileo probe into Jupiter that confirmed 1975-1983 reports [1-5] that Earth’s heat source is NOT a steady H-fusion reactor, as assumed in SSM and AGW models.
Dr. Dan Goldin finally released the data himself in 1998, when asked to do so while being video recorded on CSPAN News. (The video recording is available on request.)
1. “Elemental and isotopic inhomogeneities in noble gases: The case for local synthesis of the chemical elements”, Trans MO Acad Sci 9, 104-122 (1975)
8-With regard to the last statement: only three of the models Dessler analyzed were within the confidence intervals of his version of Spencer’s graph at the time of peak response (three or four months?). The thing these models apparently have in common is that they are supposedly better at simulating ENSO than other models are. This may or may not be true:
On the other hand, none of the models are within the confidence intervals shown for Roy’s calculation of the results. The difference between them seems to have something to do with Roy using HadCRUT and Dessler using… something else, I’m not clear.
Now, is the agreement of some models with Dessler’s version of the data related to their sensitivity? Well, if both higher- and lower-sensitivity models are worse, then I would say no. Which would mean that there is just not enough information in the flux and temperature data, when analyzed this way, to determine the sensitivity. Roy actually said as much in his paper. So I’m not clear what the issue is. Dessler strangely seems to think there actually IS enough information to determine sensitivity (despite showing, essentially with his own analysis, that this is not true) even at zero lag, despite the fact that his claimed cloud-feedback slopes were not statistically significantly different from zero!
According to Pinker et al., 2005, surface solar irradiance increased by an average 0.16 W/m^2/year over the 18 year period 1983 – 2001 or 2.9 W/m^2 over the entire period.
This change in surface solar irradiance over 1983 – 2001 is almost exactly 1.2% of the mean total surface solar irradiance of the more recent 2000 – 2004 CERES period of 239.6 W/m^2 for which the mean Bond albedo has been claimed to be 0.298 and mean surface albedo to be 0.067 (Trenberth, Fasullo and Kiehl, 2009).
The ISCCP/GISS/NASA record for satellite-based cloud cover determinations suggests a mean global cloud cover over the 2000 – 2004 CERES period of about 65.6% and over the entire 1983 – 2008 27-year period a mean of about 66.4±1.5% (±1 sigma).
ISCCP/FD and Earthshine albedo data for the 2000 – 2004 period enable estimation of the relationship between albedo and total cloud cover, which is best described by the simple relationship:
Bond albedo (A) ~ 0.353C + 0.067 where C = cloud cover. The 0.067 term represents the surface SW reflection (albedo). For example, for all of 2000 – 2004; A = 0.298 = 0.353 x 0.654 + 0.067
According to ISCCP/GISS/NASA mean global cloud cover declined from about 0.677 (67.7%) in 1983 to about 0.649 (64.9%) in 2001 or a decline of 0.028 (2.8%).
This means that in 1983; A ~ 0.353 x 0.677 + 0.067 = 0.305
and in 2001; A = 0.353 x 0.649 + 0.067 = 0.296
Thus in 1983; 1 – A = 1 – 0.305 = 0.695
and in 2001; 1 – A = 1 – 0.296 = 0.704
Therefore, between 1983 and 2001, the known reduction in the Earth’s albedo A as measured by ISCCP/GISS/NASA should have increased total surface solar irradiance by 200 x [(0.704 - 0.695)/(0.704 + 0.695)]% = 200 x (0.009/1.399)% = 1.3%
This estimate of ~1.3% increase in solar irradiance from cloud cover reduction over the 18 year period 1983 – 2001 is very close to the ~1.2% increase in solar irradiance measured by Pinker et al (2005) for the same period.
The period 1983 – 2001 was a period of claimed significant global (surface) warming.
However, within the likely precision of the available data for the above exercise (perhaps of the order of ±0.5% at ±2 sigma?), it may be concluded that the finding of Pinker et al (2005) regarding the increase in surface solar irradiance over that period could easily be due to an almost exactly equivalent decrease in Earth’s Bond albedo resulting from mean global cloud cover reduction.
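For what it’s worth, the arithmetic above can be checked in a few lines. This sketch simply reproduces the comment’s numbers (the 0.353/0.067 fit and the ISCCP cloud fractions are taken as quoted, not independently verified, and the comment’s rounded albedo values of 0.305 and 0.296 are used in the final step, as in the text):

```python
# Reproduce the albedo arithmetic above (all inputs as quoted in the comment).
slope, surface_albedo = 0.353, 0.067        # fitted relation: A ~ 0.353*C + 0.067

A_1983 = slope * 0.677 + surface_albedo     # cloud cover 67.7% in 1983 -> ~0.306
A_2001 = slope * 0.649 + surface_albedo     # cloud cover 64.9% in 2001 -> ~0.296

# The comment rounds A_1983 down to 0.305; use the rounded values as it does.
absorbed_1983 = 1 - 0.305                   # = 0.695
absorbed_2001 = 1 - 0.296                   # = 0.704

# Symmetric percent change in the absorbed fraction (1 - A)
pct = 200 * (absorbed_2001 - absorbed_1983) / (absorbed_2001 + absorbed_1983)
print(round(pct, 1))                        # -> 1.3, vs Pinker et al.'s ~1.2%
```

The ~1.3% figure is somewhat sensitive to rounding of the albedos, but the comparison with Pinker et al.’s ~1.2% survives either way.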
He might be making some good points; after all, the energy content of the oceans, and changes therein, is vast.
However, his assertion that climate models are probably essentially OK (if I understood him) is not, to me at least, borne out by the facts. His remark at the end that “ideas which purport to overthrow decades of accepted science are almost always wrong” is utter banality. One could note that, hitherto, almost all ideas that had in the past been held as true for decades were eventually overthrown.
I would like to note that the theory of the candle is essentially OK, but that doesn’t mean there is no room for the light bulb – unless, of course, you make candles. With this in mind I, opportunistically, draw your attention to advances in forecasting (and climate) science which we have made at WeatherAction, and which decades of standard approaches cannot grasp: namely, our power to predict the detailed advent, timing, formation region and track of Atlantic-region tropical storms from, e.g., 12 weeks ahead, and to better the short-range standard-model projections of storm track using our solar (+lunar) based perturbations. Please have a look at these PDFs and related links;
When our forecasting breakthroughs in the UK became apparent last December, when Heathrow was snowbound, I received a call from the Mayor of London, Boris Johnson. However, in the USA, land of enterprise, our offer to assist the authorities (saving millions of dollars in cash and hassle on unnecessary precautions against storms that will go elsewhere, as well as saving lives) has so far been met with silence thundering across the Atlantic.
For those of you who would like to see our ‘EndGame’ forecast of how the track of the present storm Katia will go: I have to say this is interesting, but it is ONLY available to subscribers to our long-range Tropical Storms forecast – explanation and access via http://bit.ly/otfTjL
We will make such forecasts public when the US authorities decide to do the best they can for their people, but for now it seems they feel candles are good enough.
Where is the beef, Andy? Make your frickin’ case, but don’t tell people to ‘trust you’, because they don’t! You would do better to offer substantive technical arguments, and to give Spencer and Lindzen an opportunity to directly address those arguments. Stop assuming nobody outside of climate science can handle the technical issues, because lots of people most certainly can. You see, Andy, climate scientists have poisoned the well by behaving so inappropriately (as blatant political advocates) for so long that only by acting like Caesar’s wife will they ever regain their credibility. More of the same inappropriate behavior won’t help.
The video clip is nothing but dumbed-down drivel (checkbooks?!?). His message is in essence an appeal to authority… in this case his own and that of mainstream climate science. Will climate scientists ever get past the “you are too dumb to understand this” POV? The evidence suggests not. I really wonder if Dessler can appreciate that talking down to people, no matter how nicely you try to do it, is never going to win debates. Working behind the scenes to keep Lindzen and Spencer from publishing future work, or even from formally replying to Dessler’s ‘refutation’, is not a productive approach either.
Sometimes I get the suspicion that climate scientists (pro and con) are like a couple of high school football teams arguing about which is the best football team in the world without either realizing that there is an NFL.
If you enjoy really sick humor, you will love this photograph of U.N. Secretary General Ban Ki-moon and Australian Prime Minister Julia Gillard with a sincere greeting for members of the Parliament in Canberra:
I think the data is a mess. It does have a consistent lag pattern, which I haven’t looked at closely. It could be autocorrelation, a single (or a few) large co-related spike(s) in the middle of the data, or even a general long-term relationship. I don’t know right now.
The main point is that while the Dessler comparison of Rclouds to temperature is not significant (at conventional levels), the comparison of Rclouds to differenced Gtemperature is significant. This ties in with the point that the autocorrelation of Rclouds is low and the autocorrelation of Gtemp is high, so they cannot be directly regressed; they are apples and oranges. It’s also consistent with physical theory: i.e., temperature change and forcing are both quanta of heat per unit time.
The observations of Steve McIntyre of arbitrary correlation sign are to be expected when you compare apples and oranges. The fact that I have found a significant relationship, that it’s what you expect from agreement of units, and that it comes from analysing the system in the context of a physical model of heat accumulation by the ocean, not just from regressing two variables blindly, tells me that this is not just a crackpot idea.
The lags of 4 months are there, but once again, a basic physical model tells you that a 3-month lag is to be expected from the phase shift of the derivative (or integral) of an annual cycle. It’s probably an artifact of annual variations.
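The 3-month point can be illustrated with a toy example (nothing here is from the papers in question; it is just the calculus of a sinusoid): the derivative of an annual cycle leads it by a quarter period, so a naive cross-correlation of the two picks out a 3-month lag.

```python
import numpy as np

months = np.arange(240)                      # 20 years of monthly samples
temp = np.sin(2 * np.pi * months / 12.0)     # pure annual temperature cycle
dtemp = np.gradient(temp)                    # its time derivative (flux-like quantity)

# Find the lag (in months) that best aligns the derivative with the cycle.
lags = np.arange(-6, 7)
corrs = [np.corrcoef(temp[6:-6], np.roll(dtemp, k)[6:-6])[0, 1] for k in lags]
best_lag = lags[int(np.argmax(corrs))]
print(best_lag)                              # -> 3, a quarter of the 12-month period
```

So a lag of around 3 months can appear purely from seasonal phase relationships, without any causal cloud–temperature story attached.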
I don’t fully understand all these analyses, but it certainly helps to interpret them in terms of a physical model, as it eliminates a whole lot of arbitrariness. Appreciate your thoughts.
If I were Dessler, I’d get this video off the air pdq.
The values he is using are his estimates of the standard deviations of the various components in the equation. I’ll return to the reliability of his estimates in a moment. So we now see that a PhD in climate science has a video out proclaiming to all and sundry (a) that flux is the same as energy and (b) that
if A = B+C-D, then their standard deviations (sd) have the following relationship:
sd(A) = sd(B) + sd(C) – sd(D). Who’d a thunk it? These two things are cringingly embarrassing.
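On point (b): for independent terms the variances add (the signs get squared away), so sd(A) is the root-sum-square of the component sd’s, not their signed sum. A quick Monte Carlo sketch, with arbitrary sd’s of 3, 2 and 1 chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
B = rng.normal(0, 3.0, n)      # independent zero-mean terms with sd 3, 2, 1
C = rng.normal(0, 2.0, n)
D = rng.normal(0, 1.0, n)
A = B + C - D

naive = B.std() + C.std() - D.std()                  # the claimed rule: 3 + 2 - 1 = 4
rss = np.sqrt(B.std()**2 + C.std()**2 + D.std()**2)  # root-sum-square: sqrt(14) ~ 3.74
print(A.std(), naive, rss)                           # A.std() matches rss, not naive
```

With correlated terms the cross-covariances enter as well, but the signed-sum rule is wrong either way.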
With respect to the quality of the estimates, Spencer has already demonstrated that Dessler has bypassed rather more reliable data to get to his estimates. Fluctuations in OHC flux can be estimated from ARGO data, as interpreted by Levitus, Willis etc. Dessler instead tries to estimate it directly using a large constant Cp times the sd of dT/dt from monthly time series of surface temperature. Secondly, on the RHS of this equation, Dessler estimates DeltaR (using just his own derived cloud CRF and no other forcing) and the LW feedback term (lambda*DeltaT) separately, and concludes (partly via his schoolboy error with respect to how to calculate the variance of a difference term) that the two terms together are negligible. Spencer points out, unarguably, that the total expression (DeltaR – lambda*DeltaT) is the net radiative flux imbalance – and that this compound term must reconcile with the TOA flux measurements.
As I said, I think that Dessler should pull the video quickly before he becomes a laughing stock.
“Spencer claims he used the 3 most sensitive and 2 least sensitive models in his graph.
Dessler seems to claim that models with middling sensitivity compare a lot better.
Is this true?”
Possibly. If you think of something like height, then the upper and lower values bracket the total data set. However, for something like a tracking algorithm, the upper and lower sensitivities might not be the best followers: at high sensitivity the algorithm reacts too wildly, and at low sensitivity it’s too sluggish. The middle values give the best performance.
Possibly as well, though not necessarily; and I do believe Spencer has shown this is not the case here.
23, 24-I believe the argument being made is not that the models are better because they happen to have middle-of-the-road sensitivity, but rather that they realistically (relatively speaking) simulate ENSO. So the situation is, as I see it:
in general there is marginally better performance by low-sensitivity models compared to models with high sensitivity. Much more improvement comes from better ENSO simulation. So the obvious question is: would realistic ENSO combined with fairly low sensitivity result in better agreement still? We can’t know, because only a very small number of models have been improved in terms of ENSO simulation, and these all happen to fall in about the middle of the sensitivity distribution.
I would agree with you that Bart is suggesting that straight regressions and lagged regressions don’t give particularly insightful understanding of cloud/climate interaction. The impulse response he has shown suggests a negative feedback on a scale of years rather than months. Nick cautions as to length of data, window width, vs time scale of his impulse response function for possible errors.
“Nick cautions as to length of data, window width, vs time scale of his impulse response function for possible errors.”
All analysis has traps for the unwary. What you have to ask is: given the nature of the system, what is appropriate? It may be more complicated, and so harder. Including phase dynamics makes it more complicated, but phase provides an independent dimension for additional verification, and so gives more confidence in the result.
Science has been this way, in many areas, for a very long time.
Before now, however, the public was, for the most part, shielded from the grant money shenanigans perpetrated by American scientists seeking money. Prior to the CAGW fiasco, the only people they had to impress, scientifically or politically, were their grantors.
However, scientists, of their own volition, chose to involve the American public in the CAGW discussion and, because they’d gotten away with cheating for so long on their grant proposals, they (apparently) assumed the public would buy anything they published as long as it was in a “respectable” scientific journal and “peer reviewed.”
After all, their grantors accepted and funded just about anything they requested with respect to CAGW.
A lot of PhDs in science truly do believe that the rest of us are totally ignorant of all things scientific (MM comes to mind), and they are flabbergasted that they have met with intelligent, factual and logical resistance.
Because they were not prepared for legitimate scientific and/or statistical arguments – or perhaps just because they never anticipated any kind of scrutiny, at all, even from other scientists – they never formulated any arguments to address legitimate questions/analyses of their work.
The result has been that they lash back with ad hominem attacks directed at anyone who has the audacity to question the research – or, Hell, anyone who even just dares to ask a question.
It’s a sad state of scientific affairs, but I do think (hope) that, in the end, sanity will prevail. Some of the CAGW hysterics will lose their jobs; some will continue in climate research with a new mandate of truthfulness; and some will just keep harping on the same subject until they die or the grant money dries up (whichever comes first).
Steve #35, regarding the error with the OHC that you bring up: one of the issues with determining the “non-radiative” term by subtracting CERES-measured fluxes from OHC changes is that any observational errors will automatically be bundled in with that non-radiative term. Since the “contributions” are determined by the size of the standard deviation, there doesn’t even need to be a trend bias in either CERES or Levitus; any sort of month-to-month errors can bias that “non-radiative” term high. This probably was not much of an issue for Dessler when he was coming up with ratios of 20:1, but when corrections for his errors (partly highlighted by Paul_K above, and in Spencer’s post) bring that ratio down to 3:1 or 2:1, that error bias can become an issue.
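That bundling effect is easy to demonstrate with synthetic numbers (the sd’s below are arbitrary and purely illustrative, not estimates of the real CERES or OHC error budgets): estimating a small residual term as the difference of two noisy measured series inflates its apparent standard deviation, even with zero trend bias in either series.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120                                    # ten years of monthly data
true_nonrad = rng.normal(0, 0.5, n)        # small "true" non-radiative term
flux_err = rng.normal(0, 1.0, n)           # CERES-style month-to-month error
ohc_err = rng.normal(0, 1.0, n)            # OHC-derivative measurement error

# The residual-based estimate bundles both error series in with the signal,
# so its sd is roughly sqrt(0.5^2 + 1^2 + 1^2) ~ 1.5 instead of 0.5.
estimated = true_nonrad + flux_err - ohc_err
print(true_nonrad.std(), estimated.std())
```

Since the comparison in Dessler’s argument is between the sizes of standard deviations, this kind of inflation directly pads the “non-radiative” side of the ledger.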
[...] to calculate S(t), but instead simply tries to determine the standard deviations, as mentioned by Paul_K in a comment at the Air Vent. [...]