## Models

Posted by Jeff Id on May 18, 2011

After Willis Eschenbach’s post on the linearity of climate model results, I’ve realized there is a need to explain why some of us find that a significant conclusion. Nick Stokes made the correct point that with enough parameters you can fit equations to most anything. My reply was that if the parameters are all linear the fit is far less likely. A lot of this is Greek to English blog readers.

I really did intend to quit blogging due to time constraints, and I am avoiding necessary work, but this is my relaxation time. Don’t tell my wife; she doesn’t read often. However, I cannot spend enough time to work through the equations of the different aspects of the models to demonstrate why I disagree with those who claim that the simple linear fits Willis demonstrated are expected, or even that they should be expected. What I can do is provide a few directions for the interested and technically inclined so that others can work it out for themselves. Let’s start with statements and answers.

First, climate models are claimed to be based purely on physics. This is true, except that our limited physics knowledge requires a few assumptions.

Climate models/scientists are claimed by the uninformed to agree with each other. This is demonstrably false.

Climate models demonstrate warming due to CO2. This is true.

Climate models are useless junk. This is false although GIGO applies.

My point about the amazing match of Willis’s fit was that climate model results are way, way too linear. Nobody expects convective instability to react linearly to heating — at least to my knowledge. Nobody would expect ocean temps, aerosols, cloud formation, condensation, ocean currents, or ice melt to simply increase linearly with forcings — right?

Maybe they do but it is news to me.

So Nick Stokes pointed out that I’m too broad in my statements calling models linear. He’s right; they aren’t strictly linear, linear meaning that the response scales in direct proportion to any given input. Yet the non-linear components are so damned small that the whole global climate model can be represented by a linear equation with a few terms and near-zero error. If you have any math wits, that is something interesting. Nick knows this IMO but likes to work the crowd.
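To see why a near-perfect few-term fit is mathematically notable, here is a toy sketch (every series and coefficient below is invented, not taken from any actual GCM): if a system’s output really is a lagged, linear function of its forcings, a handful of regression terms reproduces it with near-zero error. The surprise in Willis’s result is that full GCM output behaves like this toy.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2001)

# Hypothetical forcing series (W/m^2), crude stand-ins for GHG,
# aerosol, and volcanic forcing.
ghg = 0.00025 * (years - 1880) ** 2
aero = -0.3 * (years > 1950) * (years - 1950) / 50.0
volc = np.zeros_like(years, dtype=float)
volc[[3, 23, 83, 111]] = -2.0  # impulsive "eruptions"

def lagged(f, tau=5.0):
    """First-order lagged response to a forcing series."""
    out = np.zeros_like(f, dtype=float)
    for t in range(1, len(f)):
        out[t] = out[t - 1] + (f[t] - out[t - 1]) / tau
    return out

# A toy "model" temperature: linear lagged response plus small noise.
temp = 0.5 * lagged(ghg + aero + volc) + 0.02 * rng.standard_normal(len(years))

# Regress the output on a few lagged-forcing terms.
X = np.column_stack([lagged(ghg), lagged(aero), lagged(volc),
                     np.ones_like(years, dtype=float)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ coef
r2 = 1.0 - resid.var() / temp.var()
print(f"R^2 of the few-term linear fit: {r2:.4f}")
```

The fit is essentially perfect by construction here; the point of contention in the post is that real GCM output fits this way too.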

I’ll narrow this down with a few words from the CAM documentation. CAM is a fine climate model with mainstream focus and excellent documentation.

As an example here is a discussion of the parametrization of the non-convective cloud processes.

The intro is below:

The parametrization of non-convective cloud processes in CAM 3.0 is described in Rasch and Kristjánsson [144] and Zhang et al. [200]. The original formulation is introduced in Rasch and Kristjánsson [144]. Revisions to the parameterization to deal more realistically with the treatment of the condensation and evaporation under forcing by large scale processes and changing cloud fraction are described in Zhang et al. [200]. The equations used in the formulation are discussed here. The papers contain a more thorough description of the formulation and a discussion of the impact on the model simulation.

The formulation for cloud condensate combines a representation for condensation and evaporation with a bulk microphysical parametrization closer to that used in cloud resolving models. The parametrization replaces the diagnosed liquid water path of CCM3 with evolution equations for two additional predicted variables: liquid and ice phase condensate. At one point during each time step, these are combined into a total condensate and partitioned according to temperature (as described in section 4.5.3), but elsewhere function as independent quantities. They are affected by both resolved (e.g. advective) and unresolved (e.g. convective, turbulent) processes. Condensate can evaporate back into the environment or be converted to a precipitating form depending upon its in-cloud value and the forcing by other atmospheric processes. **The precipitate may be a mixture of rain and snow, and is treated in diagnostic form, i.e. its time derivative has been neglected.**

The parametrization calculates the condensation rate more consistently with the change in fractional cloudiness and in-cloud condensate than the previous CCM3 formulation. Changes in water vapor and heat in a grid volume are treated consistently with changes to cloud fraction and in-cloud condensate. **Condensate can form prior to the onset of grid-box saturation** and can require a significant length of time to convert (via the cloud microphysics) to a precipitable form. Thus a substantially wider range of variation in condensate amount than in the CCM3 is possible. The new parametrization adds significantly to the flexibility in the model and to the range of scientific problems that can be studied. This type of scheme is needed for quantitative treatment of scavenging of atmospheric trace constituents and cloud aqueous and surface chemistry. The addition of a more realistic condensate parametrization closely links the radiative properties of the clouds and their formation and dissipation. These processes must be treated for many problems of interest today (e.g. anthropogenic aerosol-climate interactions).

The parametrization has two components: 1) a macroscale component that describes the exchange of water substance between the condensate and the vapor phase and the associated temperature change arising from that phase change Zhang et al. [200]; and 2) a bulk microphysical component that controls the conversion from condensate to precipitate [144]. These components are discussed in the following two sections.

I have bolded two sections of the discussion. Before continuing, note that it took less than 5 minutes to find an example in CAM which supports my contentions. In previous threads I have alluded to the fact that models are stiff due to assumptions, and recently I’ve made the point as to why they are functionally linear.

The first bold above states that the cloud values are not determined by local Navier-Stokes flow, saturation pressures, or any pure physics, but rather are defined by expected cloud formation in certain conditions as described in some papers. Nothing wrong with that, except that you have to understand the papers themselves, the adaptation of the cloud formation, and the eventual effect on the model. A black box, in other words.

The second bold is very interesting in that the scientists are recognizing that climate model grids are so coarse that in previous models the entire grid block needed to achieve cloud formation conditions before clouds were even considered. How crude is that? I’m sure that graphs of climate model clouds demonstrated that such a large-scale grid event was unlikely. In this improved model, the clouds can form before true cloud formation conditions exist everywhere in the grid box. The point is to achieve greater ‘flexibility’ in the models without actually calculating the formation on a small enough grid scale to use actual physics to determine whether condensation has occurred.
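To make the idea concrete, here is a minimal sketch of a Sundqvist-type diagnostic cloud fraction, a common class of scheme for exactly this situation (this is not CAM’s actual formulation, and `rh_crit` is an invented tuning value): cloud fraction becomes non-zero once grid-mean relative humidity passes a critical threshold below full saturation.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-style diagnostic cloud fraction.

    Cloud begins to form once grid-mean relative humidity exceeds
    rh_crit (< 1), i.e. before the whole grid box is saturated.
    rh_crit is a tunable parameter, not derived from first principles.
    """
    rh = np.clip(rh, 0.0, 1.0)
    frac = 1.0 - np.sqrt(np.maximum(0.0, (1.0 - rh) / (1.0 - rh_crit)))
    return np.clip(frac, 0.0, 1.0)

print(cloud_fraction(0.50))  # well below threshold: no cloud
print(cloud_fraction(0.90))  # partial cloud before grid-box saturation
print(cloud_fraction(1.00))  # saturated grid box: overcast
```

The physics lives in the choice of `rh_crit` and the functional form, which is precisely the kind of sub-grid assumption the post is pointing at.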

Is this method physically wrong? NO!

Is this method prone to potential error? HELL YES!

So for those ready to chuck the model to the curb, I don’t agree. For those ready to trust the model as anything other than a toy to be verified, I also don’t agree. The scientist wonders: how do we determine if the model is correct? Which assumptions are correct? In my opinion a true scientist wouldn’t trust a thing from climate models until the whole model is disassembled and verified section by section, engineering style. This is not what climate scientists have given us; instead we are lectured on the benefits of socialist society, reduced consumption, massive government and bovine scatology energy production. Not good, folks.

If such a study existed for a model or group of models and it were properly verified, I would remain very open-minded on the topic. Whether the standard math in this model is correct is not up to me; it is up to physics. We have no say. Pure linearity of response to forcings is not a good sign, IMO.

Despite Neven’s hopes, I’m not stupid and I am not ignorant; I simply am not convinced. If those who say I should be convinced can answer my questions rather than censor them, they will have a much better shot. This is where climate science fails – and dramatically.

—

Read the climate model link. Read the other sections — there are MANY examples like the above. Nick’s statement that the models are simply Navier-Stokes solutions is incomplete to the point that it is not correct. My point (and others’) that Willis’s study of the linearity of climate models is shocking should resonate with the mathematically inclined, and not in a good way.

Read and study. I believe 100% that climate models will one day be accurate enough for prediction and planning. There is no question in my mind. Today the process is corrupted by agenda; tomorrow it will change, and, as in Galileo’s case, the Earth will eventually be allowed to orbit the Sun.

## Carrick said

I’ve made this point before, Jeff ID: when you average over the entire Earth you are dramatically reducing the number of degrees of freedom in the resulting system. I’ve called these “bulk parameter” or “lump sum” models in the past.

For the global system, it’s not surprising, given that it is heavily forced (and the forcings are specified), that you can reconstruct the long term signal accurately using a simplified model that depends on these forcings.

The kicker though (a point made in the past by Lucia) is whether the parameter choices you’ve made are even physically realizable. For any given full 3-d climate model, the final *form* may be similar (or the same) as Willis’ model, but what you would find (if you were able to derive the 0-d version of the 3-d models analytically) is that there are *constraints* on the range of physically realized values.

Obviously we can do better if we relax the assumption of physical realizability: in signal processing, you will generally get better filter performance if the filter is acausal (I can actually use this principle for off-line analysis of signals). But in this case, the model needs to be constrained to be causal, and must not violate energy conservation, the 2nd law of thermodynamics, and so forth.
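The causal-versus-acausal point can be sketched quickly (synthetic data; the window length is arbitrary): a centered (acausal) moving average uses future samples and tracks a noisy signal with no phase lag, while the causal version, restricted to past samples, lags the signal and scores worse.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 800)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

w = 41  # filter length in samples (arbitrary choice)
kernel = np.ones(w) / w

# Causal: trailing average, uses only past samples -> introduces lag.
causal = np.convolve(noisy, kernel)[: t.size]

# Acausal: centered average, uses future samples -> no lag.
acausal = np.convolve(noisy, kernel, mode="same")

# Compare on the interior to avoid edge effects.
sl = slice(w, -w)
rmse_causal = np.sqrt(np.mean((causal[sl] - clean[sl]) ** 2))
rmse_acausal = np.sqrt(np.mean((acausal[sl] - clean[sl]) ** 2))
print(f"causal RMSE:  {rmse_causal:.3f}")
print(f"acausal RMSE: {rmse_acausal:.3f}")
```

The acausal filter wins, which is fine for off-line analysis; a physical model, as noted above, does not get that luxury.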

## Jeff Id said

Carrick,

Your point is well made and understood but I wonder if you find it unusual that the models have no detectable non-linearity to forcings over 120 year scales. My belief is that this linearity is surprising and we should expect deviations of slope at the endpoints of a warming/perturbed system.

## Oliver K. Manuel said

“First, climate models are claimed to be based purely on physics.”

It is my understanding that thermodynamics claims the temperature change of an object depends on the difference between the net heat input and the net work output.

It is also my understanding that most fields of science advance by observations that improve on earlier established facts, e.g.,

a.) William Herschel’s conclusion in the 1800s that food production is linked to cycles of solar activity [1], and

b.) Richard Carrington’s observation in September 1859 that Earth is immediately impacted by solar eruptions [1].

Which climate models took these historical facts into account or more recent evidence that Earth’s heat source is a very stormy, violently unstable star [1-3]?

More recent findings indicate that solar flares and “super-flares” from the pulsar in the Crab Nebula are probably events triggered by the same source of energy – neutron repulsion [3-5].

Which climate models consider the nature of the nuclear reactions that sustain the Sun and life itself as dynamic processes?

It appears to me that modern climate models and efforts to stop climate change are not based on physics.

With kind regards,

Oliver K. Manuel

Former NASA Principal Investigator for Apollo

References:

1. Stuart Clark, “The Sun Kings: The Unexpected Tragedy of Richard Carrington and the Tale of How Modern Astronomy Began” [Princeton University Press, 2007] 211 pages

2. Curt Suplee, “The Sun: Living with the Stormy Star,” National Geographic Magazine (July 2004): http://ngm.nationalgeographic.com/ngm/0407/feature1/index.html

3. O. Manuel et al., “Super-fluidity in the solar interior: Implications for solar eruptions and climate,” Journal of Fusion Energy 21, 193-198 (2002): http://arxiv.org/pdf/astro-ph/0501441

4. O. Manuel, “Neutron Repulsion,” The APEIRON Journal, in press, 19 pages (2011): http://arxiv.org/pdf/1102.1499v1

5a. NASA Headquarters, “Fermi spots ‘superflares’ in the Crab Nebula,” Astronomy (May 12, 2011): http://www.astronomy.com/~/link.aspx?_id=0d3b6d3b-da6c-40fd-a22a-73f1d713a332

5b. Nancy Atkinson, “Crab Nebula Erupts in a Superflare,” Universe Today (May 11, 2011): http://www.universetoday.com/85580/crab-nebula-erupts-in-a-superflare/

5c. NASA’s Chandra X-ray Observatory, “Crab nebula: The crab in action & the case of the dog that did not bark,” PhysOrg.com (May 16, 2011): http://www.physorg.com/news/2011-05-crab-nebula-action-case-dog.html

## jstults said

FTFY.

Significant analytical simplifications happen before discretization; the governing equations are *not* Navier-Stokes. Furthermore, they don’t even converge solutions to *that* equation set. It’s all there in the docs and diagnostics; these discussions would be a lot better if folks would RTFM.

## Nick Stokes said

Re: jstults (May 18 22:48),

I didn’t actually say that they were “simple Navier-Stokes solutions”. I think the statement referred to is where I said:

“But why I called what you said completely wrong is that it says ‘the models are simple linear combinations …’ etc. And they just aren’t. They are *numerical Navier-Stokes* solvers.”

## Carrick said

Nick Stokes:

I would say their equations are based on Navier-Stokes, rather than being NS solvers.

A “real” NS solver is intractable for the full Earth, of course.

## Carrick said

Jeff:

See my comment to Nick…I think if they did solve the full NS equations, you probably would see differences. How important they are for global mean temperature derived from a climate model is not something I have any real intuition on, but it’s an interesting point.

## TimTheToolMan said

How do the forcings actually work in the models? For aerosols, for example, is the amount of aerosol in a particular grid cell simply total aerosol forcing × 1/(number of grid cells)

…or is the split up of that forcing arbitrarily (hopefully sensibly) allocated differently to each? So that, for example, there are more aerosols above major cities and volcanoes than over say grassland in winter?

I guess I’m leading to this: a global temperature has dubious meaning, but that pales in comparison to the question of whether a global forcing, applied equally everywhere, has any meaning at all.

## Nick Stokes said

Carrick #6,

Well, the full set includes the Navier-Stokes equations, though they omit the vertical velocity component, with good reason. See this 1983 paper by Hansen et al – Table 1, p. 612. Eqs. T1 and T2 are Navier-Stokes.

## jstults said

Too bad those aren’t the equations they actually implement; they assume hydrostatic (even in the text of that Hansen paper, right at the top of that same page across from the Table you mention).

Try the latest CAM description, chapter 3; same thing. Why is it so difficult to admit they *don’t solve Navier-Stokes*?

## Nick Stokes said

Re: jstults (May 19 07:15),

Yes, they assume hydrostatic, which is related to the omission of vertical velocity. Which is to say that they assume no vertical force or momentum change. And for motion averaged over cells of order 100 km, that’s necessary. Updrafts must be modelled, not resolved.

But they do solve F=ma in the horizontal, and a constitutive equation. That’s Navier-Stokes.
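For readers wondering what the hydrostatic assumption actually replaces, here is a sketch (an isothermal toy atmosphere with round-number values, not any model’s actual configuration): instead of prognosing vertical momentum, the vertical pressure structure is diagnosed from the balance dp/dz = -ρg, which for constant temperature gives the familiar exponential profile.

```python
import numpy as np

g, Rd, T, p0 = 9.81, 287.0, 260.0, 101325.0  # isothermal toy atmosphere
H = Rd * T / g                                # scale height, ~7.6 km

z = np.linspace(0.0, 20_000.0, 2001)
dz = z[1] - z[0]

# Integrate dp/dz = -rho*g = -(p / (Rd*T)) * g upward from the surface.
p = np.empty_like(z)
p[0] = p0
for k in range(1, z.size):
    p[k] = p[k - 1] * (1.0 - g * dz / (Rd * T))

p_exact = p0 * np.exp(-z / H)
print(f"scale height H = {H:.0f} m")
print(f"max relative error vs exp(-z/H): "
      f"{np.max(np.abs(p - p_exact) / p_exact):.2e}")
```

The balance is diagnostic: no vertical acceleration ever appears, which is exactly why vertically propagating sound waves drop out of the system.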

## jstults said

Yes, the horizontal direction is really conservation of mass/momentum without the simplifying assumption, but they don’t actually *solve* (in the sense of converging to a unique solution of the governing equations) that either ; – )

So they don’t *solve* Navier-Stokes, and they don’t solve *Navier-Stokes*, see?

## Frank K. said

Sorry Nick Stokes…Jstults is correct. They are NOT solving the Navier-Stokes equations. Period.

They MAY* be solving an approximation of the conservation of momentum equations (along with continuity, energy, and other transport equations), but it’s clear that they don’t include a proper viscous diffusion term, although they include the ubiquitous F “dumping ground” term (it’s the right-hand-side term you use when you don’t feel like writing your equations down properly; that’s where the viscous terms are “hidden”). And what about turbulence? I’ll stop there…

To put it another way, if I solve the continuity, inviscid momentum, and energy equations, I say that I’m solving the Euler equations, not the “inviscid Navier-Stokes equations.”

* I say MAY in the above because GISS doesn’t like to properly document their codes, and we don’t know exactly which differential equations they are REALLY solving in codes like Model E. But that’s SOP at GISS…

BTW, here are the REAL Navier-Stokes equations.

## Bad Andrew said

Jeff,

That model is Jennifer Love Hewitt. Now my favorite climate model. ;)

andrew

## Peter said

Nick, Here is a simple non-mathematical summary: If it walks like a duck, and quacks like a duck,………….it’s a duck.

## Nick Stokes said

Frank K,

No, it’s not an Euler equation – there is definitely diffusion of momentum. CAM 3 has a section 3.1.14 on it.

I am actually very familiar with the Navier-Stokes equations – but I did read your link, noting in line 6: “They may be used to model the weather”.

And JS #12, no I don’t *see*. Is your point that they don’t stop to converge exactly at each time step? No CFD program does – there is no point in doing so. You are stuck with spatial discretisation error, and there is nothing to gain by being purist about time.

## Frank K. said

Nick Stokes said

May 19, 2011 at 9:43 am

“Frank K,

No, it’s not an Euler equation – there is definitely diffusion of momentum. CAM 3 has a section 3.1.14 on it.”

Nick…I know they aren’t modeling the Euler equations. But they aren’t modeling the Navier-Stokes equations either (particularly in climate modeling). Look at the forms of the differential equations, please…

It hardly matters though. They are modeling the atmospheric momentum balance (or F=ma) under suitable simplifying assumptions, and the differential equations that result are the ones you cited in the CAM documentation (which are NOT the Navier-Stokes equations. Repeat…they are NOT the Navier-Stokes equations…Repeat…).

Of course, we could talk about the fact that when you include multiple phases (e.g. liquid droplets and vapor), and homogeneous mixtures of air, water vapor, aerosols, etc., then you’re no longer solving the Navier-Stokes equations either, but rather a simplified form of the extraordinarily complex, coupled system of multiphase continuity, momentum, and energy equations… and you really don’t want to go there…

## Frank K. said

“And JS #12, no I don’t see. Is your point that they don’t stop to converge exactly at each time step? No CFD program does – there is no point in doing so. You are stuck with spatial discretisation error, and there is nothing to gain by being purist about time.”

As someone who has been professionally involved with CFD for over 20 years, this statement makes no sense whatsoever…

If they don’t care about temporal or spatial errors, then what’s the point of doing any simulation at all? Hey, let’s use a big time step, put lots of numerical dissipation in my scheme, and solve away. Life’s good!

## kim said

The Power Stroker

Cleaves the air a mighty blow.

Ebbing of the Force.

===========

## jstults said

No; iterative convergence of the time-step is not my point. Frank gets it. The “solutions” are not solutions; they are not grid independent. I’m not even interested in DNS or even LES. I know point-wise convergence is unattainable, but it’d be nice to see some grid convergence results for the widely integrated functionals like global mean surface temp.

## Frank K. said

Thanks Jstults. Let me take this discussion further by stating the three basic requirements of any numerical scheme for solving PDEs like the Navier-Stokes equations (for reference, see “Numerical Computation of Internal and External Flows”, Volume 1, 1990, by C. Hirsch). They are:

(1) Consistency

The numerical formulation must be shown to be consistent with the PDEs being solved as dx, dy, dz, and dt tend to zero. One can use Taylor series expansions on finite difference and finite volume schemes to demonstrate this, but for the spectral methods favored by weather and climate codes, it is less clear (one can, however, talk about the number of modes retained in the spectral basis functions).

(2) Stability

The numerical formulation must not permit errors to grow and amplify without bound. You can show stability of model equations using techniques like the von Neumann stability analysis method, where errors are modeled using Fourier series and stability bounds can be determined (usually involving Courant-Friedrichs-Lewy, or CFL, number constraints). The practical application here is to determine the maximum stable time step. Unfortunately, it is very difficult to extend this beyond simple model equations (or systems of linearized equations).

(3) Convergence

The numerical solution should approach the exact solution of the PDEs as the spatial increment and the time step tend to zero. This is different from consistency in that you can have a consistent formulation which does not approach the exact solution (because it is unstable). However, Lax (1967) developed a theorem (called Lax’s Equivalence Theorem) which states:

“If you have a well-posed initial value problem with a consistent discretization, stability is the necessary and sufficient condition required for convergence.”

So the only way to prove convergence for the non-linear Navier-Stokes equations is to run a grid independence study showing that your solutions are stable at several (increasingly finer) grid resolutions and that the solutions are tending to a single, unique result. This is difficult enough with simple problems like flows over airfoils; you can only imagine the difficulties associated with proving convergence for GCMs.
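The grid-independence prescription above can be demonstrated end-to-end on a problem small enough to verify completely (a 1-D Poisson problem with a known exact solution; the resolutions chosen are arbitrary): halving the grid spacing should shrink the error by the scheme’s theoretical order, here second order.

```python
import numpy as np

def solve_poisson(n):
    """Solve u'' = -sin(x) on (0, pi) with u(0) = u(pi) = 0 using
    n interior points and second-order central differences.
    Exact solution: u = sin(x). Returns the max-norm grid error."""
    h = np.pi / (n + 1)
    x = np.linspace(h, np.pi - h, n)
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, -np.sin(x))
    return np.max(np.abs(u - np.sin(x)))

# Halve h each refinement (h = pi/(n+1)), then estimate observed order.
errs = [solve_poisson(n) for n in (20, 41, 83)]
p1 = np.log2(errs[0] / errs[1])
p2 = np.log2(errs[1] / errs[2])
print(f"observed orders: {p1:.2f}, {p2:.2f}  (theory: 2)")
```

When the observed order matches theory, you have evidence the code is converging to the governing equations; the thread’s point is that this exercise is rarely, if ever, shown for GCM functionals like global mean temperature.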

## Carrick said

Nick Stokes:

Not really. But it may be Euler’s equation.

…then again, it does appear you have an odd definition of what Navier-Stokes is (as I recall, at one point you included the equation of state and conservation of momentum in your equation(s); correct me if I’m wrong…)

I’m comfortable saying that the models solve approximate forms of NS, even though that is an overstatement in some cases, and they do a lot more than just solve F=ma in the horizontal, they also wavenumber filter to remove acoustics and gravity waves. That’s not the same thing as solving NS, which is pretty much intractable at this point for the full atmosphere.

I would never say that I solve NS in my work, even though I start with NS and my only “sin” is dropping nonlinear terms that are negligible in the problem space that I’m studying.

Also, T1 and T2 in your link don’t include an explicit dissipation term, which is the only thing that distinguishes NS from Euler’s equation. It’s interesting that even your CESM link refers to the equations as “an Eulerian core,” not a Navier-Stokes form.

## curious said

Open question – on CA (and by others elsewhere – John Pittman here maybe?) I’ve seen Gerald Browning and Pat Frank referring to the use of a hyperviscous atmosphere to get the models to give in-range results. I’ve not gone back into the CA archive to track it down, but, possibly related to this, is something that has been on my mind since Anastassia’s posts prompted the “where do winds come from” debate – does anybody have any references on the earth/atmosphere mass dynamics induced by the fact that the earth is (approximately) a spinning oblate spheroid with non-uniform surface roughness inside a gravitationally retained atmosphere? I have googled around for it in the past and not found anything. Any comments, leads or references appreciated. Thanks.

## Frank K. said

Carrick said

May 19, 2011 at 2:28 pm

“It’s interesting that even your CESM link refers to the equations as ‘an Eulerian core,’ not a Navier-Stokes form.”

I think this refers to the fact that the equations are formulated in the traditional Eulerian form (using Reynolds Transport Theorem) versus a Lagrangian form (which is used for things like particle tracking).

## jstults said

Frank K.: yes; the “Eulerian Dynamical Core” mentioned in the docs (chapter 3) is to distinguish the different treatments of the vertical direction.

Hydrostatic may be assumed “for good reason”, but it does cause numerical difficulties, and raises questions of well-posedness in the vanishing time/space step limit (“In practice, the solutions generated by solving the above equations are excessively noisy. This problem appears to arise from aliasing problems in the hydrostatic equation” – that’s right, chapter 3 again).

This is related to Browning’s criticisms. I don’t think this is news to the modelers, but it probably is to uncritical cheerleaders of The Science all over the internet (in propaganda they’re called “useful idiots”, but I prefer “uncritical cheerleaders” because it’s less pejorative).

## Craig Loehle said

I have been arguing that the GCMs are physics in the same sense as “truthiness”–they are sort of based on the truth, but in many critical places (like clouds) there either is no basic physics or it must be abandoned and fudged due to computing limitations. There is some empirical understanding of clouds, but not sufficient for the GCM purpose, so they are kludged in. Yet clouds have a huge impact on the system behavior (perhaps, as Spencer posits, even being able to produce short-term oscillations like the el nino). The kludge takes out the possibility of Spencer’s self-oscillations and other nonlinear behaviors and makes the behavior very linear.

## Craig Loehle said

To clarify my comment 26: in a true turbulent system with slow ocean currents, I would expect at least some degree of ringing – a sudden perturbation like a volcano should create oscillations at some time scale in the ocean currents/air circulation. Ecological systems can develop wave behaviors, for example. Various theories have been developed for just this mechanism to explain the Dansgaard-Oeschger oscillations of 1500 yrs and sudden glacial changes. But the simplifications and boundary energy/mass conservation code in a GCM prevent such ringing.
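The ringing argument can be sketched with two toy response models (all parameters invented): a first-order relaxation, the kind of behavior a heavily damped or “stiff” model produces, versus an under-damped second-order system, which rings after an impulsive perturbation such as a volcano. Counting zero crossings of the response cleanly separates the two behaviors.

```python
import numpy as np

dt, n = 0.1, 1000
t = dt * np.arange(n)
forcing = np.zeros(n)
forcing[50] = -1.0 / dt  # impulsive "volcano" perturbation

# First-order, energy-balance-like response: pure relaxation, no ringing.
tau = 5.0
x1 = np.zeros(n)
for k in range(1, n):
    x1[k] = x1[k - 1] + dt * (forcing[k] - x1[k - 1] / tau)

# Second-order, under-damped oscillator: rings after the impulse.
omega, zeta = 1.0, 0.15
x2, v = np.zeros(n), 0.0
for k in range(1, n):
    a = forcing[k] - 2 * zeta * omega * v - omega**2 * x2[k - 1]
    v += dt * a
    x2[k] = x2[k - 1] + dt * v

def crossings(x):
    """Count sign changes after the impulse has been applied."""
    s = np.sign(x[60:])
    return int(np.sum(s[1:] * s[:-1] < 0))

print("first-order crossings: ", crossings(x1))
print("second-order crossings:", crossings(x2))
```

The relaxation recovers monotonically (zero crossings: none), while the oscillator swings through zero repeatedly; the question raised here is which behavior GCM sub-grid treatments permit.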

## Craig Loehle said

And in fact, the el nino/la nina looks just like the ringing I am suggesting should occur, and that the models do a terrible job of matching.

## Carrick said

Frank K:

Yes, I think you are correct on this. Thanks for pointing that out.

Here’s a write up of a “semi-Lagrangian dynamical core,” so clearly this refers to the formulation, and not to one particular equation making up the “dynamical core” of the model.

## Nick Stokes said

Carrick #22,

“Not really. But it may be Euler’s equation.”

No, as I said above, the key difference between N-S and Euler is the diffusion of momentum. And GCMs have that. Annoyingly, they’ve buried it in the F at the end of the momentum equation, which makes it hard to find.

The Navier-Stokes equations have an “Eulerian core”. Here they are referring to the computation sequence – the distinction between prognostic and diagnostic variables. The Eulerian core includes the acoustic waves, which have to be attended to, even though they can filter out the highest frequencies. They are the fastest thing happening. The diffusion can wait.

You do have to include the equation of state with this formulation of the N-S. Conservation of mass gives a rate of change of density, but to relate that to the momentum equation, you have to convert that to a rate of pressure change. So I should have included T4 in that list.

## Nick Stokes said

Re: Frank K. (May 19 13:56),

“The numerical formulation must be shown to be consistent with the PDEs being solved as dx, dy, dz, and dt tend to zero.”

Frank, you might like to draw on your 20 years of CFD experience to come up with a paper that does that in conjunction with solving a practical macroscale problem. As JStults says (#22), it can’t be done once you have sub-grid modelling, as with turbulence. But planes still fly.

And I did, in my brief observation that started this, say:

“But why I called what you said completely wrong is that it says ‘the models are simple linear combinations …’ etc. And they just aren’t. They are *numerical Navier-Stokes* solvers.”

And the statement that I was contrasting it with said, expanded:

“The result demonstrates that the models are simple linear combinations of assumed forcings. The heart of a top global warming model is exposed in these posts.”

Which version do you prefer?

JS #25: “Hydrostatic may be assumed ‘for good reason’, but it does cause numerical difficulties.”

Well, it avoids a bigger one. As Hansen says, the CFL condition associated with vertical acoustic waves would be impossible – a time step of a few seconds.
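The “a few seconds” figure follows from simple arithmetic (the grid spacings below are illustrative round numbers, not taken from any specific model): the explicit-acoustic Courant limit is dt ≤ dx/c, and vertical layers are hundreds of meters while horizontal cells are ~100 km.

```python
# Courant limit dt <= dx / c for explicitly treated acoustic waves
# (sound speed c ~ 340 m/s). Grid spacings are illustrative only.
c = 340.0                  # m/s
dx_horizontal = 100_000.0  # ~1-degree horizontal grid cell, m
dz_vertical = 500.0        # typical lower-atmosphere layer thickness, m

print(f"horizontal acoustic CFL dt: {dx_horizontal / c:.0f} s")
print(f"vertical acoustic CFL dt:   {dz_vertical / c:.1f} s")
```

The thin vertical layers, not the horizontal cells, set the punishing limit, which is why vertically propagating sound waves are removed via the hydrostatic assumption rather than resolved.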

## Jeff Id said

Nick, you are correct that my statements were too strong. However, the obvious intent of my statement was to say that the linearity is extraordinarily and surprisingly dominant. If the result of the model is linear, then my ‘incorrectness’ becomes a matter of semantics and poor wording.

Aren’t you the least bit surprised that such a simple equation can represent all the flow forcing feedbacks of a global climate model?

## Nick Stokes said

Jeff, I wasn’t intending to pile on about your statement – just explaining the context of mine, which has caused animated discussion.

No, I’m not really surprised. I’d go back to the water out of the pipe analogy. Complex equations, but underlying conservation principles.

Incidentally, AGW folk are more often defending the other side of this – how can something as simple as a climate sensitivity emerge from atmospheric chaos? Willis’ post shows the answer.

## jstults said

This is why it’s hard to say, “it’s not an NS solver”, borrowing credibility is easier when you don’t say that.

Huh? They are not referring to a “core of the Navier-Stokes equations”, they really are referring, as Frank pointed out, to the reference frame.

From the Finite Volume section,

This document describes the Finite-Volume (FV) dynamical core that was initially developed and used at the NASA Data Assimilation Office (DAO) for data assimilation, numerical weather predictions, and climate simulations. The finite-volume discretization is local and entirely in physical space. The horizontal discretization is based on a conservative “flux-form semi- Lagrangian” scheme described by Lin and Rood [1996] (hereafter LR96) and Lin and Rood [1997] (hereafter LR97). The vertical discretization can be best described as Lagrangian with a conservative re-mapping, which essentially makes it quasi-Lagrangian.From the Spectral Element section:

HOMME represents a large change in the horizontal grid as compared to the other dynamical cores in CAM. Almost all other aspects of HOMME are based on a combination of well-tested ap- proaches from the Eulerian and FV dynamical cores. For tracer advection, HOMME is modeled as closely as possible on the FV core. It uses the same conservation form of the transport equa- tion and the same vertically Lagrangian discretization [Lin, 2004]. The HOMME dynamics are modeled as closely as possible on Eulerian core. They share the same vertical coordinate, vertical discretization, hyper-viscosity based horizontal diffusion, top-of-model dissipation, and solve the same moist hydrostatic equations.From the Eulerian Dynamical core section:

The hybrid vertical coordinate that has been implemented in CAM 5.0 is described in this u section. The hybrid coordinate was developed by Simmons and Str ̈ fing [1981] in order to provide a general framework for a vertical coordinate which is terrain following at the Earth’s surface, but reduces to a pressure coordinate at some point above the surface. The hybrid coordinate is more general in concept than the modified σ scheme of Sangster [1960], which is used in the GFDL SKYHI model. However, the hybrid coordinate is normally specified in such a way that the two coordinates are identical.From the Semi-Lagrangian section:

The semi-Lagrangian dynamical core adopts the same hybrid vertical coordinate (η) as the Eulerian core, defined by

p(η, ps) = A(η)po + B(η)ps ,    (3.393)

where p is pressure, ps is surface pressure, and po is a specified constant reference pressure. The coefficients A and B specify the actual coordinate used. As mentioned by Simmons and Burridge [1981] and implemented by Simmons and Strüfing [1981] and Simmons and Strüfing [1983], the coefficients A and B are defined only at the discrete model levels. This has implications in the continuity equation development which follows.

Some of the dynamic core schemes are a lot like ALE (arbitrary Lagrangian Eulerian) codes, or for a CFD’er, like the way the old guys did shock fitting with one of the coordinates defined implicitly in terms of the jump conditions.
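The hybrid coordinate formula is compact enough to sketch directly. The A and B values below are made up for illustration (they are not CAM's actual coefficient tables), but they show the defining behaviour: pure pressure levels aloft, terrain-following levels near the surface.

```python
# Sketch of the hybrid vertical coordinate p(eta, ps) = A(eta)*po + B(eta)*ps.
# The A, B values below are illustrative only, NOT CAM's actual tables.
po = 100000.0  # specified constant reference pressure, Pa

# A dominates aloft (pure pressure levels); B dominates near the surface
# (terrain following).  At the model top B = 0; at the bottom A = 0, B = 1.
A = [0.00, 0.05, 0.15, 0.10, 0.05, 0.00]
B = [0.00, 0.00, 0.10, 0.40, 0.75, 1.00]

def hybrid_pressure(ps):
    """Pressure at each model level (top to bottom) for surface pressure ps (Pa)."""
    return [a * po + b * ps for a, b in zip(A, B)]

# Over a mountain (ps = 70000 Pa) the lower levels follow the terrain,
# while the upper levels sit at fixed pressures regardless of ps.
for ps in (101325.0, 70000.0):
    print(ps, hybrid_pressure(ps))
```

The point of the blend is visible in the output: the second level is at 5000 Pa for either surface pressure, while the bottom level always equals the surface pressure itself.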

Re: CFL limitations. That’s why we use implicit schemes or integrating factors, or residual smoothing, or…

Now I see why Mosher has so much fun with activists who won’t read the emails. Say something else that’s so easily refuted by a document I’ve linked three times on this thread already ;-)

## Nick Stokes said

JS,

Yes, I agree that Eulerian probably refers to the grid (ie not Lagrangian).

On CFL, no, I don’t believe implicit treatment would avoid the CFL issue. The basic problem – element aspect ratio, would give a very ill-conditioned matrix.

## Shub Niggurath said

Nick Stokes,

If you are not surprised, could you please tell me the reason to perform complex modeling?

## Jeff Id said

Shub,

They are discussing the nuance of valid methodologies; what is being missed is what Craig Loehle brought up and the point of the thread: the sub-grid estimations, which are sometimes as opaque as Mann08 RegEM. Don’t chuck the concept of proper modeling out with what we are being told is proper modeling.

## jstults said

You’re right, the high aspect ratio cells can be problematic. One of the cool things I noticed about CAM5 is the spectral element scheme that works for arbitrary unstructured quads (seems they have a cubed sphere implemented). This sort of thing opens the door to better grids, and governing equations that are well-posed at high resolution.

## Nick Stokes said

Shub #36,

“please tell me the reason to perform complex modeling”

The reason is that it is physics-based. If you fit a linear function which takes forcings over a period to observations of GMST, then that’s all you have. A fit to GMST for that period. If you have a physics-based model, you have a good reason to believe that it will do as well in the future as it did in the period you studied. And you can vary properties – eg GHG conc. And get a huge range of other output variables.

## Frank K. said

Nick Stokes said

May 19, 2011 at 8:16 pm

Re: Frank K. (May 19 13:56),

“The numerical formulation must be shown to be consistent with the PDEs being solved as dx, dy, dz, and dt tend to zero”

“Frank, you might like to draw on your 20 years of CFD experience to come up with a paper that does that in conjunction with solving a practical macroscale problem. As JStults says (#22), it can’t be done, once you have sub-grid modelling, as with turbulence. But planes still fly.”

Actually, Nick, consistency is easy to show for finite difference discretizations. By substituting Taylor series expansions into your numerical discretization, you can isolate the higher-order error terms and show that they vanish as dx, dy, dz, dt tend to zero, yielding your original differential equation. It has nothing to do with solving macroscale or microscale problems. You can consult the C. Hirsch book I cited previously for more details.
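The Taylor-series argument is easy to demonstrate numerically. A minimal sketch (mine, not from the thread), using a central difference on sin(x) as a stand-in scheme: the leading error term is (h²/6)f‴(x), so the truncation error should fall roughly fourfold each time h is halved.

```python
import math

# Consistency check for the central difference (f(x+h) - f(x-h)) / (2h).
# Taylor expansion gives f'(x) + (h^2/6) f'''(x) + O(h^4): second order.
def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.0
exact = math.cos(x)  # d/dx sin(x)
errors = []
for h in (0.1, 0.05, 0.025, 0.0125):
    errors.append(abs(central_diff(math.sin, x, h) - exact))

# Observed order of accuracy p = log2(e(h) / e(h/2)); should approach 2.
orders = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print(errors)
print(orders)  # each entry close to 2.0
```

The vanishing of the error as h shrinks is exactly the consistency property; it says nothing, of course, about what sub-grid closures do to the solved equations.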

What I was criticizing before was your statement: “Is your point that they don’t stop to converge exactly at each time step? No CFD program does – there is no point in doing so. You are stuck with spatial discretisation error, and there is nothing to gain by being purist about time.”

You appear to have a cavalier attitude towards numerical error and stability, but maybe I was wrong about that. In fact, most CFD developers are keenly interested in errors, stability, convergence, and (in the end) accuracy. But accuracy can’t be assessed until you know that your numerical solution is “converged.” And convergence can’t be assessed until you have a handle on the errors, which you can get from things like mesh and time step independence studies.
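A mesh-independence study of the kind described can be sketched in a few lines. Here a trapezoid-rule integral stands in for a CFD quantity of interest (a toy of my own, not a climate-model calculation); Richardson extrapolation turns two grid solutions into an error estimate without knowing the exact answer.

```python
import math

# Toy mesh-independence study: compute a quantity of interest on two
# grids, then use Richardson extrapolation to estimate the converged
# value and the remaining discretization error.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

f = math.exp
exact = math.e - 1.0  # integral of exp(x) on [0, 1], known here for checking

coarse = trapezoid(f, 0.0, 1.0, 64)
fine = trapezoid(f, 0.0, 1.0, 128)

# Trapezoid is 2nd order: error ~ C*h^2, so halving h cuts the error 4x.
richardson = fine + (fine - coarse) / (2**2 - 1)
error_estimate = abs(fine - coarse) / 3.0  # estimated error of `fine`

print(abs(coarse - exact), abs(fine - exact), abs(richardson - exact))
print(error_estimate)
```

The `error_estimate` is the usable error bar: it needs only the two grid solutions, which is why grid-refinement studies are the standard way to get a handle on discretization error.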

## Frank K. said

More on consistency here…

http://www.ecmwf.int/newsevents/training/rcourse_notes/NUMERICAL_METHODS/NUMERICAL_METHODS/Numerical_methods2.html

## Shub Niggurath said

Nick

Your answer is not addressing the core of the question I asked.

It looks like one can do all the things you say with the simple equation as well.

Jeff

I am not chucking the concept of modeling. And I think you are being overly defensive about it yourself! No, we don’t need to chuck it. Let’s keep doing it. This is not an excuse to ‘bash modeling’ etc etc.

Let me use a (shamelessly) borrowed example from Robert Laughlin’s book: A Different Universe

Imagine a plane flying mid-air. And you are an ignorant outsider observing this movement from afar. Asked the question, “what is the plane doing?”, you give a simple linear equation that describes the straight-line motion of the plane. On numerous flights, this equation is checked and is found more than reasonably correct. Because you are ignorant, you don’t know anything about the fact that the apparently simple straight-line motion of the plane is just the summed emergent property of many constantly varying input factors and powerful feedback mechanisms – variable fuel combustion, gyroscopic stabilization, autopilot course correction, etc.

If the observer shifts his perspective (from afar) to a much closer distance, he would discover perhaps, that the apparent straight line of flight is actually composed of an enormous number of small jagged zigzags going this way and that.

Now if someone asks me: “what is the climate doing?” and I answer with Willis’ formula, am I wrong?

If someone asks me: “what is the plane doing?”, do I need to build a computer model, incorporating all the wing and rudder controls, autopilot software, gyroscopes, jet engines etc in order to answer the question?

If someone asks me: “What is the climate doing?” do I need to start building a model of the climate system in order to answer the question? No I don’t.

Indeed, in the real world, if you start applying multi-composite formulae to estimate known and studied processes whose emergent properties always collapse to give utterly predictable outcomes, people will consider you insane.

Even at the centennial scale, it appears that we are already at a fairly distant perspective – the supposedly numerous well-understood and completely unknown chaotic factors work against each other. The conclusion is inevitable: climate is linear for practical purposes. The chaos of it makes no difference.

## jstults said

And it will always look that way…until you compare the two approaches on out-of-sample data.

## Jeff Id said

Shub,

Sorry, I didn’t understand your point from one sentence. I agree with your last comment.

On the main issue, the fact that the modeled climate system is completely uninteresting in characteristic ought to give clues as to whether it is functional or not.

As Nick quoted von Neumann:

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

What would von Neumann say about a climate model with hundreds of parameters iteratively stepped through time that can only fit a line?

What does it mean if models are accurate? Can we say we don’t have any albedo tipping points or accelerations showing up? No strong non-linear convective feedbacks from thermal instability in the atmosphere? No hugely increased storm formation removing energy from the system? Just a linear increase? All that extra energy, and the Earth responds linearly to forcing.

Stefan-Boltzmann says that for a black body we should expect a fourth-order relationship to power in. So we should be fourth order to albedo and solar influence. I guess albedo change due to land use is a minor factor. Of course that is radiation in vs. out. With atmospheric forcing due to composition changes, we are talking about conduction and convection changes for heat removal. There really isn’t more heat to remove; the heat is simply held longer in the lower atmosphere before release. Conduction of heat through air is so small it is a non-factor (or should be). Atmospheric/oceanic heat is transported by water condensation, flow and convection. Apparently these, when combined with albedo change, are linear processes when taken in bulk – I’m definitely surprised.
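For scale: even the fourth-order Stefan-Boltzmann term is very nearly linear over the few-kelvin range under discussion. A back-of-envelope sketch (an idealized black body at an assumed 255 K effective emission temperature, purely illustrative):

```python
# sigma*T^4 is strongly non-linear in the abstract, but over a few kelvin
# it is almost perfectly linear.  Idealized black body near T0 = 255 K.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T0 = 255.0        # assumed effective emission temperature, K

def emission(T):
    return SIGMA * T**4

# Linearization about T0: slope dF/dT = 4*sigma*T0^3
slope = 4.0 * SIGMA * T0**3

worst = 0.0
for dT in (-3.0, -1.5, 1.5, 3.0):
    actual = emission(T0 + dT) - emission(T0)
    linear = slope * dT
    worst = max(worst, abs(actual - linear) / abs(actual))

print(slope)   # ~3.8 W m^-2 per kelvin
print(worst)   # relative error of the linear approximation: under 2%
```

So the radiative piece linearizes almost exactly over this range; the surprise expressed above is really about the convective, cloud, and albedo terms doing the same.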

## John F. Pittman said

Curious #23, on the exponential growth of error, on CA and on RC we discussed several aspects of Browning and Kreiss. Jerry was part of it, as well as, IIRC, Pat and Frank and Craig. In one of the posts it was pointed out that Model E used hyperviscosity to prevent a grid mass and energy balance from going negative. Dr. Loehle’s comment was basically that he was used to kludgey eco programs, but negative mass!??!

Actually Gavin was giving out about as good as he was getting from an engineering view. Engineers know that assumptions and simplifications can give useful, not perfect or correct, answers. But I think the point at which we began discussing what these simplifications and assumptions resulted in, and how the models were constrained to keep them from blowing up, left all of us gobsmacked. The problem, as on this thread, is that the claims of what the models do and what they are actually doing can’t be resolved. The lack of resolution is that without verification and validation they just cannot be accepted by an engineer, because one would not be certain that they work well enough to be useful in their current unverified, unvalidated condition.

No amount of discussion will change those such as myself, who know you have to validate and verify if you want the plane to fly correctly, or those who choose to ignore that the models we do have – for a very simplified, constrained system compared to the earth, air, water system – can design wings only because of the IV&V that was done in the past.

## Ruhroh said

@39, Nick wrote;

” If you have a physics-based model, you have a good reason to believe that it will do as well in the future as it did in the period you studied. And you can vary properties – eg GHG conc. And get a huge range of other output variables. ”

Three questions arise for me. Regarding ” it will do as well in the future as it did in the period you studied”, this seems to be a very weak form of validation of a complex model, against a single ‘ground truth’ experiment. How well did it do at predicting anything nonlinear in the studied period?

Regarding ” And you can vary properties – eg GHG conc.”

As I understand it, the GHG conc has been varied to create the temperature curve which Willis fitted linearly. Have the owner-operators of the GCM done this work and found non-linearities or fork points?

Regarding ” And get a huge range of other output variables.”

I assume you are not saying the other output variables have huge ranges [usually when I see kiloamps and megavolts, I know something is wrong with my physics-based sims]. Of the huge range of predicted outputs, could you point to a few (e.g., rainfall changes, ozone hole size, etc.) that have been thus gotten?

It is my impression that the various GCM cannot satisfactorily quantify the poorly-named ‘climate feedback factor’. Is that not a simple scale factor which remains unresolved to date?

RR

## jstults said

take a look at these pdfs and vids to get an idea of what Nick Stokes is talking about: CESM tutorials (unfortunately requires registration, and the video files are ridiculously large)

I think he means you get vector output fields rather than single scalars (not talking about the magnitude of the scalars).

## Shub Niggurath said

Jeff,

Thanks for your reply.

When you said “I did not understand your point from one sentence” do you mean you did not understand anything at all right from the first sentence?

I’ll reframe and repeat my overall question: why would I be interested in building a computer model of an elephant if I can take four parameters and make the elephant? In other words, the computer model of the elephant is interesting only inasmuch as it approximates and behaves like the real thing anyway.

## Jeff Id said

Shub,

I’m not disagreeing, I just didn’t understand.

## Ruhroh said

This thread seems to be a fine microcosm of this paper,

“Seductive Simulations?”

http://sciencepolicy.colorado.edu/admin/publication_files/resource-1891-2005.49.pdf

by an Anthropologist (!) embedded in the U.S. NCAR ‘tribe’ for 6 years (!).

Many fine candid quotes from modelers, and the empiricists who doubt them…

From the abstract;

“This case study also challenges the assumption that knowledge producers always are the best judges of the accuracy of their models. Drawing on participant observation and interviews with climate modelers and the atmospheric scientists with whom they interact, the study discusses how modelers, and to some extent knowledge producers in general, are sometimes less able than some users to identify shortcomings of their models.”

RR

## Neil said

I still don’t get what you guys are talking about, Jennifer Love Hewitt is definitely not linear :)

REPLY: Thank god for that.

## Ruhroh said

@47 Jstults;

Thanx for that link.

Flogging my way through the vast linked stuff, in search of keywords ‘validate’ or ‘verify’.

Found these issues listed on the Software Engineering Group plan, Executive Summary;

http://www.cesm.ucar.edu/working_groups/Software/plan2000-2005/

“Code has limited modularity, so it is not very extensible. Little software is reused among CCSM components.

Scientists manage software engineers, though they often do not have expertise or interest in modern software languages, practices, and tools.

Few software engineering procedures are documented and systematically followed.

The size of the software engineering staff is inadequate to support the maintenance, development, usage, and support of a community model.

It is difficult to hire technical staff for a variety of reasons. There are limited opportunities for advancement, salaries are considerably below those in the commercial market, and many software engineers are not trained in or willing to work in Fortran.

It takes time for new people to be productive since practices are not documented, code is not very modular, and most code is not written to specifications.”

Admittedly, this is the ’5-year plan’ from 2000; however, it is the most recent plan on that nifty website.

Apparently the Execs didn’t yet implement these recommendations;

“We suggest that updated versions of this plan be released on a yearly basis.

We strongly encourage a review of the CCSM software engineering status and this document by independent outside consultants such as those available through the Software Engineering Institute (SEI). ”

Note that this plan was written by an NCAR insider…

RR

## jstults said

Ruhroh, you’re welcome.

Some links to V&V lit for climate models that I’ve come across are collected in this thread; please pardon the mess, it’s an open notebook.

## Shub Niggurath said

From Lahsen’s paper. ROFL.

## M. Simon said

Loved the model. I believe curve fitting by braille techniques is in order. Hands on so to speak.

## M. Simon said

“please tell me the reason to perform complex modeling”

“The reason is that it is physics-based.”

But in actuality they are in part physics based and in part estimation based. As has been pointed out they do not do the physics of clouds. And if Svensmark is correct there is a cosmic ray/cloud hole. Of course that would require predicting cosmic rays. Good luck with that.

And here is a biggie. They do not predict volcanism. When and where and how much will the next important volcanic eruption be? If we could nail that down, perhaps we could predict earthquakes. Very handy, that. But we can’t. Yet.

So how much will a given amount of CO2 heat the Earth? Depends on water vapor amplification/damping. But models don’t do that from basic principles. They do it with parameters. Kind of like the way weather forecasting used to be done: look for similar patterns and hope they apply.

The book “Bodyguard of Lies” covers that well when discussing how the D-Day weather predictions were made. Weather prediction in 1944 was pattern matching. And it was mostly seat of the pants. i.e. agreement on how a pattern might evolve was hotly contested.

Models are pretty good with basics. Add CO2 – the atmosphere warms up. By how much? Well that is a bone of contention.

## M. Simon said

“If someone asks me: “what is the plane doing?”, do I need to build a computer model, incorporating all the wing and rudder controls, autopilot software, gyroscopes, jet engines etc in order to answer the question?”

In fact that is what we do when testing aircraft these days. We simulate the parts not in a particular black box and see how the black box responds. As development goes along we put in more black boxes and reduce the amount of simulation code. Until it is all black boxes and fuselage/wings and no simulations.

Then we fly it to see if we got it right and amend our simulations accordingly – because from time to time we design and build replacement black boxes. And it is cheaper to do the initial testing in a simulator rather than a flying aircraft.

## M. Simon said

“usually when I see kiloamps and megavolts, I know something is wrong with my physics-based sims”

You are working on insufficiently large devices. ;-)

## Brian H said

Willis’ linear equation is a hell of a lot cheaper to run than the models.

And what it says is that all the internal mathematical complexities are a wash, they’re self-cancelling. Undeniably, on the face.

THEREFORE, they are un-physical. Nowhere and nowhen does the atmosphere or climate actually behave like that. As the virtually zero “skill” of the models in tracking real-time weather and climate demonstrates.

## curious said

45 John – thanks for the follow up. Please can I give the bit about the relative motion of the planet and its atmosphere a bump? If you or anyone have any comments on this I’d be grateful – is it significant? If so, do the models handle it? If they do, does viscosity matter? Thanks

## John F. Pittman said

Curious, I looked at several aspects of hyperviscosity. One of my favourites was a made-up alien world where they were testing the scalability to region analysis. You can give it a bump, as far as I am concerned. If you need permission, I believe that tAV requires only honest discussion.

On the bump, or other phenomena such as clouds: most of the models have simplifying assumptions such that several of these do not “need” to be included. The problem I had was along the lines of the claims that they had the physics correct, when the use of hyperviscosity was common. The other part, as posted here, is just how they handle N-S (Navier-Stokes). According to Browning, Kreiss, and Gravel, inadequately would be the best description.

One cannot claim to have correct physics if one has to use hyperviscosity. The other problem is that if you look at where hyperviscosity was developed, it was developed with the ability to test, as indicated in M. Simon #57, often in semi-infinite space. This is a boundary condition that helps make the equations constrained in a tractable form with the assumption of pseudo-equilibrium. An engineer appreciates such. I don’t think physicists necessarily agree.

## M. Simon said

For those unfamiliar with Browning, Kreiss, and Gravel some links.

http://climateaudit.org/2006/05/15/gerry-browning-numerical-climate-models/

http://climateaudit.org/2007/02/11/exponential-growth-in-physical-systems/

http://journals.ametsoc.org/doi/pdf/10.1175/JAS3929.1

## Alexander Harvey said

How much non-linearity are people looking for?

How much non-linearity could this method hide?

Over the narrow range of forcings/temperature variation what would constitute significant non-linearity?

If I consider just the first non-linear quadratic term, and bear in mind that globally averaged temperatures have possibly increased by 5ºC since the glacial maximum and also vary by around 4ºC pk-pk during the year due to orbital eccentricity, I might consider how big the quadratic term could be without risking a point of instability somewhere in the plausible range of globally averaged temperatures, which I will take quite arbitrarily as +/-5ºC.

If one presumed that we are at least 5ºC away from such an instability due to a quadratic term, there is a limit to how big the quadratic component over this narrower 1ºC range could be. It is, I think, around 5% of the linear component at +/-0.5ºC, which would extrapolate out to 50% at +/-5ºC, indicating a point of instability somewhere in the +/-5ºC range.

My point being that plausible values of non-linearity may be rather small.

I have had a brief look at how big a quadratic non-linearity in the forcing term could be before the fit would be lost. I found that the fit improves until the quadratic term reaches about 6% of the linear term at the peak values which is the same ballpark as what could be plausibly consistent with a +/-5ºC range in global average temperatures without encountering an instability.

None of these figures is all that important or sound, and the actual function could be anything but quadratic; my sole point is to query whether the apparent goodness of this linear fit is in any way incompatible with the amounts of non-linearity that might be plausible.

Alex
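Alex's 5%/50% figures check out under one reading of “point of instability” – taking it as the turning point of the quadratic response, which is an assumption on my part, since the comment doesn't pin the definition down:

```python
# Check of the arithmetic above.  Take a response f(T) = a*T + b*T^2 and
# define the "point of instability" as the turning point T* = -a/(2b),
# where the restoring slope changes sign (one possible reading of the
# argument).  If |T*| is at least 5 degrees, how big can the quadratic
# term be relative to the linear term at +/-0.5 degrees?
a = 1.0                   # linear coefficient (arbitrary units)
T_star = 5.0              # assumed distance to the instability, deg C
b = -a / (2.0 * T_star)   # largest |b| consistent with |T*| = 5

def ratio(T):
    """Size of the quadratic term relative to the linear term at T."""
    return abs(b * T**2) / abs(a * T)

print(ratio(0.5))  # 5% at +/-0.5 deg, matching the figure in the comment
print(ratio(5.0))  # 50% at +/-5 deg
```

Since the ratio grows only linearly in T, the quadratic contamination over the observed ~1ºC range is necessarily small if the nearest instability is assumed far away, which is the comment's point.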