the Air Vent

Because the world needs another opinion

Updated Spencer Ocean Model

Posted by Jeff Id on July 2, 2011

Roy Spencer has pointed out that his simple ocean model contained a factor-of-ten error in the heat capacity of water.  The model is a simple spreadsheet, linked below, that anyone can work with.

FOLLOWUP NOTE: The above spreadsheet has an error in the equations, which does not change the conclusions, but affects the physical consistency of the calculations. The heat capacity used for water is 10 times too low, and the diffusion coefficients are also 10x too low. Those errors cancel out. I will post a new spreadsheet when I get back to the office, as I am on travel now.

Paul K also noted the error, along with what he believed was an energy conservation problem, in the missing heat thread:

Jeff et al,
I pulled down Dr Spencer’s spreadsheet with a view to testing higher order integration, and discovered that there are two major errors in the spreadsheet, which probably make further conversation on his findings a bit useless, at least until he has had a chance to review and correct the errors, and modify his conclusions accordingly.
1) Dr Spencer noted (in an update) an error of a factor of about 10 in the heat capacity term, but argued that this was compensated for by a change in the heat diffusion term of the same order. In fact, the argument for compensatory errors is only valid for the calculations below the first layer, where the calculation involves only terms from interlayer heat flow. The argument is not valid for the calculation of the temperature of the first layer. This critically includes the integration (over time) of the total heat flux due to radiative imbalance, expressed in the model in the form F(t) – lambda * DeltaT [F is the forcing, lambda is the feedback parameter and DeltaT is the temperature change]. There is no compensatory mechanism for the error in heat capacity, and this introduces a substantial error in the first-layer temperature calculation.
2) The heat capacity term in the model for each layer is given by 50*418000/86400. It is not clear where these values come from, but it is easily confirmed that the final value from this expression is too small by a factor of about 10. I calculated it should be 2555 on the back of an envelope. However, the heat flow term out of layer 1 into layer 2 includes a factor of 41,800 for the layer 1 calculation and a factor of 418,000 for the layer 2 calculation of heat flow from layer 1 into layer 2, which causes the model to bust conservation of energy.

These two errors are sufficient for me to throw the towel in. Pity. I’m going to bed.
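Paul K's first point can be sketched numerically. The following is my own simplification, not Dr. Spencer's actual cell formulas, and the forcing and feedback values are arbitrary: the top-layer update divides the radiative imbalance by the layer heat capacity, so a factor-of-10 error there scales the temperature response directly and cannot be cancelled by the diffusion terms.

```python
# Toy version of the top-layer energy balance dT/dt = (F - lambda*dT) / C.
# Values are illustrative only; C is the heat capacity of a 50 m column.

def top_layer_step(dT, forcing, lam, C, dt_seconds):
    """One explicit Euler step of the top-layer temperature anomaly (K)."""
    return dT + (forcing - lam * dT) / C * dt_seconds

dt_month = 30 * 86_400     # seconds in the model's 30-day timestep
C_right = 50 * 4.18e6      # J/(m^2 K): 50 m of seawater per square meter
C_wrong = C_right / 10     # the spreadsheet's low value

print(top_layer_step(0.0, 1.0, 1.0, C_right, dt_month))  # ~0.0124 K
print(top_layer_step(0.0, 1.0, 1.0, C_wrong, dt_month))  # ~0.124 K, 10x too large
```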

I fell asleep early and woke up at 2 am, so the house is quiet and I began reading the equations carefully.  It turns out that they are both right.  The spreadsheet cells from the air/water boundary contained the following equation:


I couldn’t figure out PaulK’s envelope calculation of 2555, but I did figure out that Roy’s number 86400 is the number of seconds in a day and $AJ$10 is the number of days in a timestep – 30 in this case. The other factor, 418000, is supposed to be the number of joules needed to change the temperature of 1 cubic meter of sea water by 1 degree; in fact it should be about 4,180,000 (I found slightly lower references on the internet, but that is close enough).  Note, though, that the middle term has 41,800, which is actually a factor of 100 off and definitely violates conservation of energy, as Paul K states.

The diffusion factor does dominate this model, though, and that is what leaves me wondering how realistic any of this is in comparison to measurements of ocean energy exchange.  I’m an engineer, not a climatologist, after all.  I updated the above numbers and modified the diffusion by a factor of 10 as Dr. Spencer suggested, and calculated the following answer:  simple-forcing-feedback-ocean-heat-diffusion-model-v1.0-1 revised Jeff

You can see that the red line ends up matching the blue line very closely.   It would only take a slight tweak of the diffusion coefficients to place them directly on top of each other.  In other words, correcting the problems does not change the result much.  What is interesting is that there is meaning in the diffusion coefficients, and my diffusion coefficients are 10X higher than the original.  It seems to me that these energy diffusion numbers would be nailed down a little better than that, but I don’t really know.    That is the point of Roy’s demonstration, though: nobody really knows, and a model matched to PCM1 only requires tweaks to the top diffusion layers above the thermocline to get a good match to observation with a completely different sensitivity.

Anyway, the diffusion per layer is shown in the following table:

Depth (meters)   Diffusion Coefficient, Observations   Diffusion Coefficient, PCM1 Model
0 4.4 8
50 2.1 9
100 16 12.5
150 21 21
200 39 39
250 40 40
300 41 41
350 41 41
400 42 42
450 42 42
500 42 42
550 42 42
600 40 40
650 30 30
700 30 30
750 30 30
800 30 30
850 30 30
900 30 30
950 29 29
1000 28 28
1050 27 27
1100 23 23
1150 25 25
1200 24 24
1250 23 23
1300 22 22
1350 20 20
1400 20 20
1450 20 20

The diffusion coefficient is the number of times that the full energy capacity of the 50-meter-thick layer exchanges with the layers above and below each month.  A value of 4 would mean that the exchange occurs once per week. A value of 30 is once per day – interesting, no?  I didn’t mess with the values at all, except to multiply by 10.   For Dr. Spencer to complete his model now, he’s going to have to adjust the values a little more to get the quality of match he used to have, but these are not far off.
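As a sanity check on conservation, here is a minimal sketch (my own, not the spreadsheet's equations) of interlayer exchange with the coefficients read as "exchanges per month": each interface subtracts from one layer exactly what it adds to the layer below, which is the property the mismatched 41,800/418,000 factors violated.

```python
# Toy explicit diffusion over a stack of equal 50 m layers.
# coeffs[i] is the exchange rate (per month) at the interface between
# layer i and layer i+1; sub-stepping keeps the explicit scheme stable.

def diffuse_month(temps, coeffs, substeps=1000):
    """Return layer temperatures after one month of interlayer mixing."""
    t = list(temps)
    dt = 1.0 / substeps
    for _ in range(substeps):
        for i, c in enumerate(coeffs):
            flux = c * (t[i] - t[i + 1]) * dt   # heat moved downward
            t[i] -= flux                        # upper layer loses it...
            t[i + 1] += flux                    # ...lower layer gains the same amount
    return t

temps = diffuse_month([20.0, 15.0, 10.0], coeffs=[4.0, 2.0])
print(temps)         # layers relax toward each other
print(sum(temps))    # total stays 45.0 (up to rounding): energy conserved
```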

72 Responses to “Updated Spencer Ocean Model”

  1. mrsean2k said

    Jeff (and possibly Roy), can I make an appeal: if you’re going to use Excel / spreadsheet of choice in this way, there are a few simple changes to the way you construct them that will make this kind of error far easier to spot, and make maintenance and development of the spreadsheet easier to boot.

    Essentially, do not use numeric constants in your formulae!

    Rather, put your constant in a separate cell, labelled suitably, name that cell, and then refer to the cell name in any formula.

    This has a couple of effects:

    1) it becomes far easier to change the value of a constant when it is used multiple times. You change only the named cell’s value, and avoid, for instance, cut and paste errors when making a change.

    2) If you use a suitably descriptive cell name, it becomes self-evident what the purpose of the constant is, and lays bare assumptions to yourself and others.

    To take Jeff’s example,


    Create a cell labelled as SecsPerDay with the value of 86400, and JPerCubicMeterPerDeg with a value of 418000, and substitute:


    It doesn’t take much imagination to see that there are gains in readability and self-documentation for virtually no effort (coming up with a meaningful name usually takes most of the time)

    For any spreadsheet I use, I make it an aim to have no magic numbers – even seemingly “obvious” values like 24 for hours per day may be an incorrect assumption in a given context.

  2. mrsean2k said

    Oh, and to add, if you use a named cell instead of a magic number, you can instantly see which cells would be affected when a constant changes with the spreadsheet’s “Trace Precents” or equivalent function. Again, also invaluable for finding cut-and-paste and similar errors.

  3. Brian H said

    All good practice. Spreadsheets are subject to many such errors and abuses.

    Most, e.g., don’t have spellcheckers, so “Trace Precents” instead of “Trace Percents” might cause problems.


  4. mrsean2k said

    Finally, looking at the other absolute cell references that seem obvious on inspection, and replacing the absolute cell reference with a suitably named cell:


    I’ll shut up now.

  5. mrsean2k said

    @Brian H

    Ha, indeed!

    Although the tool in question is “Trace Precedents” in OO’s “Tools” -> “Detective” menu. I forget what the Excel equivalent is.

  6. Steve Fitzpatrick said

    The average shape of the thermocline (essentially exponential), a specified average upwelling rate (about 1.2 cm/day), and the difference in temperature between the abyss and the surface specify an average diffusion coefficient. The big discrepancy in “diffusion rate” between the near-surface layers and the thermocline is because in the near-surface regions there are additional mixing effects due to wave action, but especially due to solar energy penetrating down to >100 meters in the open ocean and causing heating. The absorption of energy below the surface forces convective turnover (and a very thermally uniform, well-mixed surface layer) in the tropics and subtropics. At high latitudes the situation is more complicated (seasonal changes in the surface layer).

    The gradual increase in diffusion rate with increasing depth along the thermocline makes sense because the buoyant stability of the thermocline (which inhibits eddy driven mixing) is proportional to the first derivative of the change in temperature… the first derivative of an exponential decay function is just another exponential decay function.

    My guess is that a model which simply uses 75-100 meters of uniform temperature surface layer and an exponentially declining diffusion constant with depth would match the data pretty well.

  7. Paul_K said

    Well done.
    2 a.m.?? That’s sad. I think we have the same problem, because I was starting to tackle Dr Spencer’s spreadsheet at 2 a.m. this morning in my time zone.

  8. Paul_K said

    For completion, my heat capacity calculation was very simple.

    The top 50m of water has a thermal capacity corresponding to about 7 watt-years/m2/deg K. (Check Schwartz 2007, for example.) Dr S is integrating in time units of days, so the constant should be 7*365 watt-days/m2/deg K = 2555. Dr S’s value was around 250.
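    In code, that envelope calculation, together with a cross-check from the volumetric heat capacity discussed in the post, looks like this (my arithmetic, using Paul_K's stated values):

```python
# Paul_K's back-of-envelope: ~7 watt-years/m^2/K for the top 50 m,
# converted to watt-days because the model integrates in days.
watt_years_per_K = 7
print(watt_years_per_K * 365)    # 2555, versus the spreadsheet's ~250

# Cross-check from first principles: 50 m * ~4.18e6 J/(m^3 K),
# expressed in watt-days per m^2 per K.
print(50 * 4.18e6 / 86_400)      # ~2419, the same ballpark
```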

  9. Nic L said

    # 6 Steve Fitzpatrick
    “My guess is that a model which simply uses 75-100 meters of uniform temperature surface layer and an exponentially declining diffusion constant with depth would match the data pretty well.”

    Yes, I would expect so. Lindzen & Giannitsis showed in their 1998 JGR paper “On the climatic implications of volcanic cooling” that a 75m deep uniform temperature mixed-layer overlaying a 400m deep thermocline with a coefficient of eddy heat diffusivity of 1.5 x 10^-4 m^2/s (roughly in line with observational evidence), matches the original Hoffert upwelling-diffusion ocean model pretty well. [A 75m mixed-layer depth is a bit lower than most authors’ estimates, of 100m or so (the seasonal maximum depth is normally used), but that makes little difference on interannual and decadal timescales.]

    If one uses the standard diffusivity measure, the coefficient of eddy heat diffusivity, then no term for the specific heat capacity of water is required. I don’t know why Roy Spencer didn’t do so. Also, so far as I can see, he omitted to adjust for the fact that the forcings apply to the whole of the Earth’s surface but that the oceans only cover 71% of it.

  10. RB said

    A value of 4 would mean that the exchange occurs once per week. A value of 30 is once per day – interesting no?

    When Dr. Roy changes the data (PCM to observations), the diffusion coefficients change. I don’t have the statistical expertise of many of you, but to me that seems odd, suggesting that these are curve-fitting parameters with no physical meaning.

  11. timetochooseagain said

    Ideally we should want to physically constrain all the numbers on which your fit depends, except the one you are trying to derive from the fit (ie the sensitivity/feedback). I think it is worth looking into what realistic diffusion coefficients for the real world would be, and how that would be determined.

    It has also occurred to me that we don’t really know what forcings PCM was run with. Unless its forcings happened to be the same ones GISS uses, the difference must have come from the particular choice of diffusion coefficients. This makes knowing the real coefficients in the real world rather important, as they can have as big an effect as the choice of forcing.

  12. Paul_K said

    Nic L,
    “Also, so far as I can see, he omitted to adjust for the fact that the forcings apply to the whole of the Earth’s surface but that the oceans only cover 71% of it.”
    The assumption made by Dr Spencer is a common one, and one that does not seem unreasonable. The total net heat gain/loss by the planet is the integral of the radiative flux imbalance for the planet as a whole, not just the oceanic part. If energy is accumulating in the planet, and assuming that latent heat stays broadly constant, then on a simple thermal capacity basis, most of this accumulated energy must be stored in the form of ocean heat.

  13. timetochooseagain said

    It’s forcing in watts per meter squared, i.e. normalized to the Earth’s surface area. I don’t see how one could have a problem dealing with the area of the ocean surface unless one were taking the forcing and multiplying it by the Earth’s surface area to de-normalize it. Is that what was done? And should we really expect the energy going into land versus ocean to scale with the area of their respective surfaces? After all, water conducts heat much better than dirt.

  14. kim said

    Convects better, too.

  15. timetochooseagain said

    14-I thought that was sufficiently obvious 😉

  16. kim said

    You were right. So obvious I noted it.

  17. Brian H said

    Kim and Time;
    Obviousness is unacceptable. Abstruseness only allowed.

    Govern yourselves accordingly.

  18. Brian H said

    Speaking of which, are not volcanoes simply localized and focused dirt convection?

  19. Geoff Sherrington said

    In the reverse direction, can anything be deduced about the ability of oceans to donate energy to the process of hurricane formation? When reading that hurricanes form over hot water (and that some maintain without much evidence that hotter oceans will create more hurricanes) I find it hard to envisage adequate oceanic energy flux. It’s 8.26 pm here.

  20. j ferguson said

    are aerosols dirt convection?

  21. j ferguson said

    sorry. I know aerosols are not the fluid, or gas.

  22. timetochooseagain said

    21-Gases are “fluids”, and convection actually isn’t exclusive to liquids and gases:

    On this point the wiki article on convection oddly contradicts itself: a Rheid is a solid that deforms by viscous flow, and the wiki article says that convection can occur in Rheids. However the very next sentence says convection cannot occur in solids. Clearly wiki is confused about this.

  23. RB said

    On the pitfalls of curve-fitting in some previous exercises – perhaps there will be another edition for the current exercise:

    Anyone who deals with numerical modeling knows that if you start using too many adjustable parameters, you can often make your model fit the data very well, but the parameters chosen for the model might not be physically meaningful. That is, there are often a number of distinct combinations of the parameters that would give about equally good results. So when scientists like me see Roy Spencer curve-fitting with four adjustable parameters, red flags go up right away.

  24. Jeff Id said

    RB, that looks like a Jeff-quality screw up.

    I think the point here though is that only the top layer diffusion coefficients need to change to make the ocean fit observation. The observed warming corresponds to a lower sensitivity value. It would be worthwhile for me to look into the raw observational data to determine its quality because the model doesn’t really determine the sensitivity in this case.

  25. steve fitzpatrick said

    I agree that an ocean diffusion model based on physical argument would be more meaningful than a ‘curve fit’. In Roy’s case, I think the fitting of the different diffusion coefficients to match the measured ocean uptake profile is not so bad, since he isn’t implying physically unreasonable behavior. There really is diffusion of heat down the thermocline, and that is both widely recognized and widely accepted. The problem is a lack of knowledge of how the diffusion constant changes with depth and geographical location.

    I hope you appreciate that the use of vastly different assumed aerosol histories to make climate models more accurately hind-cast the measured temperature history is at least as questionable as, and IMO much more questionable than, what Roy Spencer has done with ocean diffusion constants.

  26. RB said

    “because the model doesn’t really determine the sensitivity in this case.” I don’t really understand what you mean here.
    “The observed warming corresponds to a lower sensitivity value.”
    Chris Colose says that with Tom Wigley’s simple program, you can make the observed values correspond to a 3C sensitivity. Isaac Held says that while simple models yield 1.5C sensitivity, the models yield 3.4C sensitivity. To me, it seems that while simple models have their uses, particularly in developing some insights, determining sensitivity based on their fit doesn’t seem to be a useful exercise. This is probably due to:
    “In this case, a potential problem with a model like Eqn. 1 is that all the feedbacks are lumped together, but in reality different climate feedbacks operate on different timescales.”

  27. RB said

    I hope you appreciate that the use of vastly different assumed aerosol histories to make climate models
    Yes, of course.

  28. steve fitzpatrick said

    Jeff Id,

    The diffusion coefficient is the number of times that the full energy capacity of the 50Meter thick layer exchanges with the above and below layers each month. A value of 4 would mean that the exchange occurs once per week. A value of 30 is once per day – interesting no?

    I don’t think that is right. Is it not the inverse of what you are saying; a bigger number means slower diffusive mixing?

  29. Papa Bear 38 said

    Your spreadsheet advice is good – I am in the process of converting several GBs worth of spreadsheets (mixed data and analysis) to named constants. After multiple nearly published (self inflicted) QC gaffes, I have seen the light.

    Thanks for reinforcing the lesson!!

  30. Kenneth Fritsch said

    “Anyone who deals with numerical modeling knows that if you start using too many adjustable parameters, you can often make your model fit the data very well, but the parameters chosen for the model might not be physically meaningful. That is, there are often a number of distinct combinations of the parameters that would give about equally good results. So when scientists like me see Roy Spencer curve-fitting with four adjustable parameters, red flags go up right away.”

    I would have to concur with this warning. I think what Spencer says about the IPCC attempting to minimize the differences between observed and modeled ocean diffusion and temperature gradient is true, but I think, by his fitting exercise, he only shows one of many possible combinations of diffusion coefficients and model sensitivity that might fit the observed curve reasonably well. Would not this discussion be better advanced by discussing how we think the models handle diffusion?

  31. RB said

    It looks like simple models such as these are useful for providing insights, but have to be used carefully. IPCC itself has several cautions in its report for their quantitative use but only as an extension to models. For instance, this seems appropriate in the context of our discussion.
    Sokolov and Stone (1998) show that when using a pure diffusion model to match the behaviour of different AOGCMs a wide range of diffusion coefficients is needed. The range here is much smaller because a 1-D upwelling diffusion model is used and changes in the strength of the thermohaline circulation are also accounted for.
    They also say
    It should be pointed out that the processes in the UD/EB model that determine the heat flux into the ocean are not necessarily physically realistic. Raper and Cubasch (1996) as well as Raper et al. (2001a) show that the net heat flux into the ocean in the UD/EB model can be tuned to match that in an AOGCM in several ways, using different sets of parameter values. Nevertheless, if the UD/EB model is carefully tuned to match the results of an AOGCM, and provided the extrapolations are not too far removed from the results used for tuning, the UD/EB model can be used to give reasonably reliable estimates of AOGCM temperature changes for different forcing scenarios. The thermal expansion results are less reliably reproduced because thermal expansion is related to the integrated heat flux into the ocean. Errors therefore tend to accumulate. In addition, the expansion depends on the distribution of warming in the ocean. Nonetheless, the simulation is adequate for comparison of scenarios.
    So why use simple models?
    By using such simple models, differences between different scenarios can easily be seen without the obscuring effects of natural variability, or the similar variability that occurs in coupled AOGCMs (Harvey et al., 1997). Simple models also allow the effect of uncertainties in the climate sensitivity and the ocean heat uptake to be quantified.

  32. gallopingcamel said

    This thread demonstrates (at least to my satisfaction) that Roy Spencer is big enough to ‘fess up to mistakes. Likewise, Richard Lindzen speedily corrected errors in Lindzen & Choi (2009) that were picked up by Kevin Trenberth.

    Can you imagine any member of the Hockey Team ever growing to the point that they could publicly admit error?

    If I am wrong about this, perhaps someone could point me to a retraction by Mike Mann following the numerous errors in his papers that have been painstakingly documented by Steve McIntyre – for example, the process that produces hockey sticks from noise, or the inverted Tiljander data. Interestingly, the inverted Tiljander data lives on in Kemp 2011:

  33. kim said

    And Loehle with the McKittrick corrections. There is resilience in skepticism which is lacking in the team effort. The need for immovable positions has led them to paralysis and sclerosis.

  34. RB said

    gallopingcamel #32,

    Are you talking about Dr. Spencer admitting a mistake in his recent spreadsheet, or are you talking about how his multiple recent attempts to seek a low climate sensitivity have been a failure leading him to seek alternative outlets for his work?
    Ultimately I find enough evidence to virtually prove my theory, but now the research papers that I submit for publication are rejected outright….

    The climate modelers and their supporters in government are largely in control of the research funding, which means that most government contracts and grants go toward these investigators who support the party line on global warming. Sympathizers preside as editors overseeing what can and cannot be published in research journals. Now they even rule over several of our professional societies, organizations that should be promoting scientific curiosity no matter where it leads.

    In light of these developments, I have decided to take my message to the people.

    (from the book, The Great Global Warming Blunder)

  35. kim said

    And to blindness.

  36. timetochooseagain said

    34-“his multiple recent attempts to seek a low climate sensitivity have been a failure leading him to seek alternative outlets for his work.”

    RB, what evidence have you got that shows his “attempts to seek a low climate sensitivity” (an erroneous statement, Roy has been trying to find the actual sensitivity, and if it happens to be low, it happens to be) have been “a failure” other than the fact that reviewers, who are likely individuals quite entrenched on this issue, won’t allow his papers to even go forward? In point of fact, he has made a number of important breakthroughs in our understanding of the satellite radiation flux data from which we might try to infer feedback, and his findings strongly imply the feedback should be negative. He may not have proof (in science, who does?) but he hasn’t “failed” to find evidence for this.

    And if you don’t believe that good papers are prevented from making it into climate journals due to reviewer and editorial bias against saying, in effect, “it’s not as bad as we thought,” then explain to me what was wrong with Ross McKitrick’s papers, which he finally had to publish in stats journals, showing A) that a key claim the IPCC made was a complete fabrication, and wrong, and B) that socioeconomic contamination of the temperature data is not spurious.

    The papers:

    Click to access ac.preprint.pdf

    Click to access final_jesm_dec2010.formatted.pdf

    The documentation of the gatekeeping by the journals:

    Click to access gatekeeping_chapter.pdf

    Click to access response_to_ijoc.pdf

  37. cementafriend said

    I agree with mrsean @1 it is best to label constants and variables, list them separately and reference them in an equation. It is also important that one has dimensions correct. Engineers (should) always put the dimensions in an equation to check that the result has the correct dimensions and that where there is an addition or subtraction that the items have the same dimension. Engineers like to work with dimensionless numbers such as Reynolds, Nusselt etc to determine correlation relationships.
    I have not checked any of the working, but it should be noted that heat capacity has the units of kJ/kg.K, or in old units kcal/kg.K (cal/gm.C). The heat capacity at constant pressure of pure water is approximately 1.0 kcal/kg.K, and this was used as a reference value for specific heat. The “Joule factor” is 4.186. Referring then to a cubic metre of water brings in a density, which for water is approximately 1000 kg/m3. Thus a variable with the units of J/m3.K for pure water is approximately 4,186,000.
    Seawater has both higher heat capacity and a higher density than pure water. It could be determined accurately but here it probably does not matter.
    In a post, Willis Eschenbach mentions the use of slide rules, which were a good discipline for getting the order of magnitude of results. Computers will only compute the figures they are given and produce an output that may or may not be correct, i.e. garbage in = garbage out.

  38. cementafriend said

    Apologies, had to rush out and did not check the typing. The heat capacity should of course be rounded to 4,200,000 J/m3.K, or rounded further, 4 MJ/m3.K.
    An interesting dimensionless number is the Schmidt number, which is viscosity/(density × diffusivity) and is used in mass transfer, analogous to the Prandtl number (heat capacity × viscosity/thermal conductivity) used for heat transfer.
    It should be noted that processes involving heat and mass transfer can be modeled by electrical circuits. The driver of heat transfer is temperature difference; the driver in an electric circuit is voltage difference. Thermal conductivity and diffusivity are analogous to electrical conductivity.
    There is no way that CO2 or any other gas can be a driver of change – wrong units.

  39. gallopingcamel said

    RB @34,
    I was thinking about the spreadsheet in this post and how it reflects on the character of individuals who admit fallibility (like Lindzen and Spencer) versus the infallible (Mann, Hansen, Trenberth, Schmidt, Santer et alii).

    However, the “Big Picture” is very interesting too and I look forward to Jeff giving that an airing in the near future.

  40. kim said

    Great Apes ponder
    The significant digit;
    Ivory, bamboo.

  41. Brian H said

    cementafriend said
    July 4, 2011 at 8:37 am

    It should be noted that processes involving heat and mass transfer can be modeled by electrical circuits. The driver of heat transfer is temperature difference, The driver in an electric circuit is voltage difference. Thermal coductivity and diffusivity are analogous to electrical conductivity.
    There is no way that CO2 or any other gas can be a driver of change- wrong units.

    You’re getting at something very important here. Please elaborate. My intuition and “reasonableness checker” have been screaming to me about a disconnect in this regard for years.

  42. RB said

    We covered that here. In your world, there seem to be only voltage sources. In the real world, there are also current sources. The sun is, equivalently, a current source.

  43. Barry Bickmore said

    Instead of thinking of the climate like a circuit, think of it like a bathtub. Water comes in at a certain rate from the spout. Water goes out the drain, and the rate depends on the area of the drain hole and the height of water above it. So when water first starts filling the tub, it’s coming in faster than it’s going out, but as the level of the water rises, so does the outflow rate. That way, there is some level of water where the inflow and the outflow rates are equal. This is the “equilibrium” or (more properly) “steady-state” condition. If you turn up the water input, the water level will go up until you reach a new steady state. If you clog the drain hole with hair, reducing the outflow rate, the same thing happens.

    In this analogy, the input of energy from the Sun is like the water coming from the faucet, and the size of the drain hole is controlled by greenhouse gases (among other things).
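    The bathtub dynamics can be written as a one-line ODE, dh/dt = inflow − k·h, with steady state h* = inflow/k. A minimal sketch (the parameter values are mine, arbitrary):

```python
# Euler integration of the bathtub: the level rises until outflow (k*h)
# balances inflow. Shrinking k (a clogged drain) raises the steady level,
# just as reduced outgoing radiation raises the steady-state temperature.

def steady_level(inflow, k, h0=0.0, dt=0.01, steps=20_000):
    h = h0
    for _ in range(steps):
        h += (inflow - k * h) * dt   # net fill rate times the timestep
    return h

print(steady_level(inflow=1.0, k=0.5))    # -> ~2.0
print(steady_level(inflow=1.0, k=0.25))   # clogged drain -> ~4.0
```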

    I’ve seen a lot of electrical engineers who get hung up on climate science because some of the terminology is different, and so on. But instead of asking questions to sort out any misunderstandings, they immediately pronounce that all the climate scientists have made some really elementary mistakes. It’s kind of sad to see highly trained engineers being so full of hubris that they transform themselves into trailer court gurus about a subject they haven’t bothered to study in any depth.

  44. RB said

    Barry Bickmore #42,
    In the thread I linked to above, we discussed the blanket analogy, and I also mentioned the leaky bucket analogy. The electrical circuit analogy works too, to get a feel for the process involved: if you have a constant current source, with the current flowing through a resistor, the potential difference across the resistor increases as you increase the resistance. By thinking of the resistance value as rho*l, where rho is the resistance per unit length, you can sort of get a feel for what is involved due to a constant lapse rate as well. As with all things, analogies should not be carried too far.

  45. Jeff Id said

    #28 Steve,

    I think I’ve got this one right. The diffusion coefficient is multiplied by the temperature differential: the bigger the coefficient, the bigger the diffusion. Sorry for the days-later reply.

    On the other discussion, the ‘current’ guys are right. If you have a network of 3 resistors on a voltage source and you increase the resistance of the middle one, the voltage at the node closest to the positive terminal goes up. Voltage is electron density and oddly analogous to heat. Incidentally, that is why those who try to claim that CO2 won’t warm the earth are so flat-earth wrong. Various feedbacks may keep it small, or maybe the IPCC is more clairvoyant than we thought, but the net from increasing CO2 is increased resistance to flow and absolutely without question, increasing heat.
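    Jeff's series-resistor picture is easy to check numerically (the resistor and source values below are mine, arbitrary):

```python
# Three resistors in series across a source; raising the middle
# resistance raises the voltage at the node nearest the + terminal,
# the analogue of added radiative "resistance" warming the surface.

def node_voltage(v_src, r1, r2, r3):
    """Voltage at the junction between r1 and r2."""
    i = v_src / (r1 + r2 + r3)   # series current
    return v_src - i * r1        # only r1's drop lies above this node

print(node_voltage(12.0, 1.0, 1.0, 1.0))   # 8.0 V
print(node_voltage(12.0, 1.0, 2.0, 1.0))   # 9.0 V: the node "warms"
```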

    Whether that is a bad thing (or perhaps even a particularly noticeable one) is the real question. I’m more skeptical every day. Nic’s recent post is quite powerful.

  46. Mark F said

    45. Jeff – you mean that CURRENT is electron density, analogous to heat; voltage is analogous to temperature (read voltage and temp DIFFERENTIALS).

  47. Jeff Id said


    Current is definitely not electron density – current is the flow of charge per time. The pressure electrons exhibit on each other corresponds to temperature or energy capacity.

    AGW is about resistance to flow. I will concede that my use of the word ‘heat’ was perhaps too loose; it should read:

    “flow and absolutely without question, increasing temperature.”

  48. steve fitzpatrick said

    Jeff #45,

    Hmm… That conflicts with my understanding of the buoyant stability of the thermocline. The stability (which must be overcome by eddy mixing) is proportional to the first derivative of density as a function of depth (and density is mainly controlled by temperature). In places where the temperature change with depth is high (near the top of the thermocline), the resistance to eddy down-mixing should be high, while where the temperature change with depth is low or zero (near the bottom of the thermocline, at the transition to abyssal waters, which are essentially constant in temperature, and in the well-mixed surface layer), the rate of eddy mixing should be high. Certainly there is no delta-T between the surface and 50-75 meters depth (in some places >150 meters!). I don’t see how perfectly uniform temperature is consistent with low diffusive mixing.

    I will look at this some more.

  49. RB said

    As I stated in an earlier discussion,
    the rate of flow of heat energy, i.e., the heat flux q = dQ/dt, is the analogue of current, i = dq/dt, where q there is the charge (sorry for the q’s, which mean different things).
    Then Fourier’s law,
    dT/dz = -q/k,

    is the analogue of Ohm’s Law in the form dV/dz = -i*rho, where rho is the resistance per unit length:
    the potential difference dV corresponds to the temperature difference dT, the current i (flow of charge per time) corresponds to the heat flow dQ/dt, and the thermal resistivity 1/k corresponds to the resistance per unit length rho. The temperature gradient dT/dz is then the analogue of the voltage drop per unit length, not of the resistance itself.
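    A minimal numeric sketch of the analogy (the slab numbers below are arbitrary illustrations, not oceanic values): steady conduction through a slab of conductivity k, area A and thickness L gives dT = Qdot * L/(k*A), mirroring dV = I*R.

```python
# Thermal-electrical analogy sketch, illustrative numbers only:
# heat flow Qdot plays the role of current I, the temperature difference dT
# plays the role of voltage dV, and L/(k*A) plays the role of resistance R,
# so dT = Qdot * L/(k*A) mirrors dV = I * R.
k = 0.6        # thermal conductivity, W/(m*K) -- roughly water, assumed
A = 1.0        # slab area, m^2
L = 0.1        # slab thickness, m
q_dot = 12.0   # heat flow, W (analogous to current)

R_thermal = L / (k * A)   # thermal resistance, K/W (analogous to ohms)
dT = q_dot * R_thermal    # temperature difference, K (analogous to dV = I*R)
```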

    Another pet peeve I have against questionably trained electrical engineers commenting on climate science is when they say that AGW warming is based on positive feedback while electrical engineers know that positive feedback leads to runaway instability – which is bull, but I got tired of that one.

  50. Jeff Id said


    The coefficients of diffusion above the thermocline are an order of magnitude lower than below it. IOW, less mixing until 50-150 meters when it ramps up radically. You are likely more familiar with the rates and literature on this topic as I’ve only spent a day or so reading. The measured Levitus data shows the amount of observed mixing per depth so the diffusion model is specifically based on matching that data.


    I have read the comments above from others and I think there is a conceptual gap in the way people are taking this model. I’ve spent several hours reading the literature on it now, as well as checking and rechecking equations. If the Levitus data is solid, and if adjustment of the feedback parameter balanced by different diffusion coefficients cannot match the result, we have a fairly accurate estimate of true atmospheric forcing. I commented above to RB that the forcing is not determined by the model but rather by the data, but I haven’t proven to myself that a situation where the data can be matched by different feedback and diffusion coefficients doesn’t exist. It does make conceptual sense that different adjustments would work because the adjustment parameters do different things. This can also be understood from the different profiles between the modeled result and the observed temps, as the observations simply have less trend in energy imbalance. I’m thinking about doing another post on the subject after I consider what is the best way to prove the result to myself and others.

  51. Jeff Id said


    “Another pet peeve I have against questionably trained electrical engineers on climate science is when they say that AGW warming is based on positive feedback while electrical engineers know that positive feedback leads to runaway instability – which is bull, but I got tired of that one.”

    It drove me nuts for a while too, but the terminology of climate science is to blame. It should be a positive effect on net feedback rather than a positive feedback or positive change in forcing (derivative) or something but once a term is adopted, it keeps being used.

  52. RB said

    In addition to the inadequacies of a purely diffusive model and the multiple solutions, I wonder if the fundamental limitations of simple linear models relating global energy imbalance to the global mean surface temperature, as I described to NicL here, also apply in this instance for deriving an equilibrium climate sensitivity value. Specifically, AOGCMs show an apparent nonlinear dependence, which I understood Isaac Held to attribute to the use of the global mean surface temperature in the energy balance relation. He says that there is a slow ocean response in the high latitudes, which are most unstable to vertical mixing. Therefore, any formulation has to relate the global energy imbalance to a surface temperature field that reflects the spatial variations, not to the global mean surface temperature. In Winton 2010 they introduce an efficacy factor to allow simple models to correlate better with the GCMs.

    Neglecting Dr. Spencer’s fit to PCM, I return to my original speculation – I wonder if what is being captured in Dr. Spencer’s model is only the fast response (due to the lower equilibration time for simpler models), therefore better correlating his result (despite the neglect of relevant physics) with the TCR than the equilibrium climate sensitivity.

  53. RB said

    Perhaps the apparent nonlinear response of energy balance to temperature when using a global mean surface temperature is also equivalent to generating the higher-order terms that gave higher sensitivity in PaulK’s analysis.

  54. steve fitzpatrick said

    I remain puzzled. Above the thermocline, the temperature is essentially uniform with depth (at any single location, that is), which seems to me most consistent with rapid vertical mixing, not very slow mixing. The surface water is mainly mixed by convection (not shear-induced eddies, as happens well below the surface), because solar heating below the surface (the blue/violet/near-UV sunlight penetrates quite deeply) means heat must be carried by convection to the surface to escape to the atmosphere. Like I said, I am puzzled. Mixing in the surface water (top 50-100 meters) almost certainly has to be very rapid, not very slow. I will do some digging.

  55. Mark T said

    #49: I don’t know any true electrical engineers who think positive feedback leads to runaway. Of course, back in the day, control theory was considered core knowledge, whereas it has become elective education in many new programs.


  56. Barry Bickmore said

    Mark T @55,

    I’ve seen a number of electrical engineers (at least that’s what they claimed they were) who said just that.

  57. Mark T said

    Note that I was not referring to people that claim anything… but to those that I know to be true EEs. I have worked with hundreds and none have ever made such a boneheaded mistake. Control theory was required for all EE/ME/ChemE and probably a few other degrees at my alma mater which accounted for 80% of the thousand or so they awarded each year.

    The Wikipedia articles on feedback are just short of idiotic.


  58. Anonymous said

    Barry Bickmore @56

    All EEs understand that in the worst case, positive feedback instability can cause system self-destruction. Whether or not this is true of earth’s climate system is another matter.

    Climate scientists developed their notions of positive and negative feedback by borrowing heavily from the work of electrical engineer Hendrik Bode. So what if some EEs are curious as to how well Bode’s work was applied by non-engineers? As a geochemist, I doubt you are in a better position to evaluate EEs’ intrusion into climate science than EEs are to judge your intrusion into their territory. All parties are slightly out of their league here. Likewise, your choice of a hydraulic (bathtub) model over that of an electrical circuit almost appears to be a demonstration of someone criticizing something they do not really understand.

    Years ago I ‘programmed’ an analog computer to solve systems of differential equations – equations which represented non-electrical systems. Note that the analog computer is simply an easily configurable electrical circuit which after energizing provides the solutions to diff. eq. via direct measurement of the circuit response at various nodes. To your claim that the electrical circuit analogy is limited, I agree. I would add, perhaps it is limited by our ability to provide systems of differential equations which correctly describe the behavior of climate.

  59. Mark T said

    Nonsense. Stability has nothing to do with positive or negative feedback. It is purely a function of pole location. In the z-domain (with which I am more than familiar) – discrete time – this means all poles are within the unit circle. For example, given a simple difference equation

    y(n) = x(n) + a*y(n-1)

    the system is bounded-input, bounded-output stable for all |a| < 1. For a > 0, this equation is equivalent to what climate folks refer to as “positive” feedback (further assuming n, the step size, results in a phase step below the cutoff region, which is driven by a, i.e., lowpass.) The reverse is true for a < 0 (it becomes “positive” for phases greater than the cutoff, i.e., highpass.)

    In either case, values of |a|~1 result in very large gains, but still a stable system. For |a|=1 the system is called marginally stable with a pole at DC (a=1) or pi (a=-1) where pi is half the sample rate.

    As a consequence, all passive systems are at least marginally stable, since you cannot feed back more than is available (feeding back everything would imply no heat escapes the system, which by itself cannot be achieved either, so |a| is strictly less than 1.) This does not mean things cannot get really hot, just that things will always settle if the input stops or is removed. An unstable system will continue to either grow exponentially or oscillate with an exponentially increasing amplitude irrespective of the initial input.

    And, please, don’t give me the microphone or audio amplifier analogy; those are three-terminal devices, i.e., they are active. An apt analogy for the climate would require yet another, hidden power source other than the sun (with infinite energy supply as well; otherwise, once it reached its max, the system would no longer increase, either.)
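    A quick sketch of the point (coefficients are arbitrary illustrations): iterate the difference equation above with a unit-step input. For |a| < 1 the output settles at the finite gain 1/(1-a), however large; for |a| > 1 it grows without bound.

```python
# Step response of y(n) = x(n) + a*y(n-1): stable for |a| < 1 (output settles
# at the finite gain 1/(1-a)), unstable for |a| > 1 (output grows without
# bound). The values 0.9 and 1.1 below are arbitrary illustrations.
def step_response(a, n_steps=200):
    """Run the recursion with a unit-step input x(n) = 1 and return y."""
    y = 0.0
    for _ in range(n_steps):
        y = 1.0 + a * y
    return y

y_stable = step_response(0.9)                 # positive feedback, still stable
y_unstable = step_response(1.1, n_steps=100)  # |a| > 1: diverges
```

    Note the stable case has a large gain (ten) despite the strong positive feedback, which is exactly the distinction between "positive feedback" and "runaway" being argued here.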


  60. Mark T said

    RB, maybe there is more truth in what you said than I was willing to accept? Sigh..


  61. RB said

    “All EEs understand that in the worst case, positive feedback instability can cause system self-destruction.”

    The issue is with those EEs who insist that positive feedback in earth’s climate is impossible because it always implies runaway instability and proclaim it as gospel from their EE background.

  62. RB said

    I’m not sure he is strictly referring to unbounded growth – even for active devices, an oscillator is a positive feedback circuit, but with an amplitude bounded by its power consumption and finite quality factor. And then there are destructive cases too, such as in MOS semiconductor devices, where sufficient positive feedback can turn on parasitic devices and cause device breakdown. Even here, the issue, as you similarly point out, is the loop gain.

  63. Anonymous said


    Active amplification is not necessary.

    Flutter is an example of passive positive feedback that has been known to destroy airplanes and bridges. The velocity of a car on ice coupled with the water depth beneath the ice can result in resonant pressures that break otherwise safe ice.

    Where have I claimed that positive feedback within the climate system is impossible without disastrous consequences? I am merely pointing out that undamped positive feedback can result in system destruction. Some EEs may have been confused or misunderstood. Big deal.

    Chill dude.

  64. Mark T said

    Chill nothing. Your understanding of instability is just wrong, period. What you refer to is a resonance, a situation in which much, or most, of the energy stored gets fed back at a frequency that the structure cannot tolerate without coming apart. It is an “instability” only insofar as the materials cannot handle the gain resulting from the transfer function represented by the feedback equation. With materials (or construction) capable of handling it, there would be no issue.

    Stability is purely a function of the magnitude of what is being fed back. Since you cannot create energy, this is necessarily less than what is currently stored in any passive (natural) system. Active systems, e.g., those with a transistor, can add as much energy as their power supplies can provide. Beyond that they will clip (or burn up.)

    This does not say anything regarding the magnitude of the resulting output except that it will a) be bounded and b) decay to zero if the input is removed.

    This notion of tipping points and instabilities is just plain silly and needs to be corrected whenever and wherever possible. Oh, for the record, there is no such thing as EE control theory or climate control theory. There is only control theory which is based on analysis of how feedback systems function mathematically.


    PS: note that even if you accept Hansen’s moronic Venus theory you cannot deny that it is not running away anywhere, it is as close to constant as we can see in nature, i.e., it is bounded and thus, stable.

  65. Mark T said

    I was going to add that the closest thing to a naturally unstable system would be atomic level reactions (chemical, fission, etc.,) that use atomic bonds as an energy source, though “systems” tend to quickly burn themselves out as their source of energy dissipates, i.e., they decay into stable reactions rather quickly.

    That’s the problem with posting on a phone from a bar… alcohol and low battery decay into bad posting ability (bad typing, too.)


  66. Anonymous said


    “Stability has nothing to do with positive or negative feedback” – MarkT

    Non-linear flutter instability does not occur without positive feedback.

    “Chill nothing. Your understanding of instability is just wrong, period” – MarkT

    “With materials (or construction) capable of handling [resonance], there would be no issue.” -MarkT

    I wanted you to chill so you could coolly think about things. But I made your problem worse. Now that I understand you were drunk, I will ignore those comments.

    Finding poles on Bode plots and using time invariant linear transfer functions as used in EE linear control theory is fine for RLC circuits and active amplifiers. It runs into problems when analyzing non-linear systems. This is a mistake that no true EE would make. 🙂

    “even if you accept Hansen’s moronic Venus theory you cannot deny that it is not running away anywhere” – MarkT

    Time for entertainment…
    Hansen’s Venus theory has long been disproved because he missed the SO2 albedo cooling that ended up cancelling his proposed sulfate cloud (greenhouse) warming. Hansen compounded his error by panicking and thinking that SO2 scrubbers on coal plants would save Earth from burning up. But oddly, 30 years of cooling reversed. So then it looked like the scrubbers were the problem because temps were going up. And SO2 albedo was better understood. So, therefore Hansen wanted clean U.S. coal plants shut down while China’s prodigious SO2 plume temporarily saved us. The latest twist is that delaying the warming delays the cure, so China must clean up SO2 emissions.

    Prior to Hansen, Carl Sagan proposed that the additional heat on Venus was caused by water vapor feedback, which was wrong. However, Hansen drastically boosts his estimate of climate sensitivity with water vapor. What a great story if cloud albedo – Hansen’s Venus bugaboo – returns to disprove him once again.

  67. RB said

    Probably there are only a couple of people possibly paying attention at this point, so I’ll stop after this. I agree that positive feedback is linked to instability – linear or non-linear. The key is of course the magnitude of positive feedback which impacts whether oscillations are damped or growing. I’ll add my $2c with phase-locked loops (PLL) as an example.

    In the context of a linear phase-locked loop (PLL) as an example of a linear control loop, transient response to a step change in the input variable (such as phase or frequency) exhibits convergence to the new state with damped oscillations for mild positive feedback.

    In closed-loop terms, for stability one is interested in not just the loop-gain magnitude but also its phase; in linear electrical circuits the two are captured together by the phase margin. A stable phase-locked loop converges to the new (phase or frequency) state; an unstable one loses lock.

    Nonlinear phase-locked loops use a looser definition of stability, where the convergence is not to a single state but to a bounded limit cycle.
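    A much-simplified stand-in for the loop behavior (a first-order phase-tracking loop, not a real PLL; the gain values are arbitrary): after a step in the input phase, the tracking error scales by (1 - K) each iteration, so it decays monotonically for 0 < K < 1, rings while decaying ("damped oscillation") for 1 < K < 2, and diverges (the loop loses lock) for K > 2.

```python
# First-order phase-tracking loop: phi(n+1) = phi(n) + K*(phi_in - phi(n)).
# The error e(n) = phi_in - phi(n) obeys e(n+1) = (1 - K)*e(n), so the loop
# gain K alone decides damped convergence vs. divergence. Gains are arbitrary.
def final_error(K, phi_in=1.0, n_steps=60):
    """Track a unit phase step for n_steps and return the residual error."""
    phi = 0.0
    for _ in range(n_steps):
        phi += K * (phi_in - phi)
    return abs(phi_in - phi)

e_damped = final_error(1.5)    # 1 < K < 2: rings, but converges
e_unstable = final_error(2.5)  # K > 2: error grows, loop loses lock
```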

  68. Alexander Harvey said

    Diffusive and upwelling diffusive oceans have fallen out of fashion but were once popular and well studied.

    At medium timescales (century), upwelling should not be ignored; otherwise one obtains results that differ from the non-upwelling case, due to extra heating with depth that is countered by the upwelling.

    With regard to free parameters: this hazard can be reduced by additional constraints. Spectral properties of the temperature response & OHC combine to constrain the parameters for the effective mixed-layer depth and bulk diffusivity. The thermal capacity of water and upwelling rates are constrained by measurement.

    Sensitivity is not well constrained as it only comes to dominate at medium timescales in the diffusive model.

    A diffusive model can be converted into response functions, either a flux response to temperature or a temperature response to flux, by imposing either a temperature pulse or flux pulse on the model and recording the resultant history. These functions then yield flux or temperature histories by convolution with the opposing history.

    Such response functions can be analysed to provide appropriate statistical tests and spectral properties.
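    A sketch of the convolution step (the exponential kernel here is an arbitrary stand-in, not a real diffusive-ocean pulse response):

```python
import numpy as np

# Response-function idea in miniature: take an assumed temperature response
# to a unit flux pulse (an arbitrary exponential decay with a 20-step
# timescale, NOT a real diffusive-ocean kernel), then obtain the temperature
# history for any forcing history by convolution.
n = 100
t = np.arange(n)
pulse_response = np.exp(-t / 20.0)   # assumed kernel, arbitrary timescale
forcing = np.ones(n)                 # unit-step forcing history

# Convolve and keep the first n points (causal part of the history).
temperature = np.convolve(forcing, pulse_response)[:n]
```

    With a step forcing and a decaying kernel, the temperature history rises monotonically toward a finite equilibrium, which is the qualitative behavior being discussed.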

    The question of the effective depth of the mixed layer is best considered in terms of the effect that spatial averaging has on the resultant temperature fluctuations. The temperature response is inverse to the thickness, and this could result in the thin areas dominating the average response and the overall effect being not that different from a diffusive layer: the thin areas responding quickly with a large amplitude, medium layers with a longer response and a smaller amplitude, thick layers with little or no high-frequency response. Here one might be guided better by the spectrum of the measured oceanic temperature anomalies and some hypothesis as to the nature of the flux “noise” component.

    The weakness with which sensitivity is constrained by the data is legendary, and I doubt that such analyses will change this. However, the same causes give rise to a useful result, in that the difficulty in determining sensitivity from sub-centennial temperature/forcing histories argues for it making little difference to sub-centennial projections of temperature from forcing scenarios.

    Current strategic thinking extends to 2050 in most cases, and 40 years falls into a region where thermal responses are plausibly dominant given a diffusive model.

    FWIW, the boundary flux/temperature relationships are tractable analytically for both the standard and upwelling cases, both with and without an effective mixed layer. The first is trivial, and the upwelling case, although less so, involves nothing more exotic than a Gamma function (as best as I can recall), which can be implemented in Excel or similar using the same algorithms as once used in programmable calculators if the function is not built in.

    There are caveats in using a diffusive ocean, e.g. due to the real origin of the eddy diffusion, which I believe is not a general phenomenon but the result of localised mixing in areas prone to vertical disturbance, margins, ridges, etc. Again, the best guide may be the data as opposed to knowledge of the physical processes when dealing with averages such as global temperatures and OHC data.

    Whatever the caveats, diffusive models are part of the armory that needs to be applied as a reality check and are a marked improvement on both trivial models with no thermal mass and slab oceans. Particularly if one wishes to develop statistical methods for hypothesis testing.


  69. hemp said

    Thus this process with a moving fluid requires both diffusion and advection of heat, a summed process that is generally called convection. Convection can be forced by movement of a fluid by means other than buoyancy forces, for example a water pump in an automobile engine. This motion is associated with the fact that at any instant large numbers of molecules are moving collectively or as aggregates. Both of these convections, either natural or forced, can be internal or external because they are independent of each other. The or the average fluid temperature is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts. For a visual experience of natural convection, a glass that is full of hot water filled with red food dye may be placed inside a fish tank with cold clear water.

  70. Brian H said

    Re: hemp (Jul 19 23:53),
    Missing word? “The ??? or the average fluid temperature…”


