v_{n} – the correction is denoted v_{n}') and the changes in pressure on either side of the velocity (p_{N}' and p_{P}'). The aim is then to make a series of pressure corrections which will lead to velocity corrections so as to eliminate the continuity errors for each grid cell.

For the grid cell at P this leads to an equation (effectively a Poisson equation for pressure) that relates the pressure changes at P, N, S, E, W (for 2D) to the continuity error at P. When the pressure correction equation is solved for all grid nodes, and the velocities are updated, then continuity is satisfied at each grid cell. So – job done! Or is it?
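As an illustration (a minimal sketch of my own, not code from the time; the uniform-grid coefficients and names are assumptions), the pressure-correction equation for each cell couples p' at P to its four neighbours, with the cell's continuity error as the source term, and can be solved iteratively:

```python
import numpy as np

def solve_pressure_correction(mass_imbalance, d_e, d_n, n_iter=200):
    """Solve the five-point pressure-correction equation
       aP*p'[P] = aE*p'[E] + aW*p'[W] + aN*p'[N] + aS*p'[S] + b
    by Gauss-Seidel sweeps, where b is the continuity (mass) error in
    each cell and d_e, d_n are the face coefficients linking velocity
    corrections to pressure corrections. Boundary p' is held at zero."""
    ny, nx = mass_imbalance.shape
    p_corr = np.zeros((ny, nx))
    for _ in range(n_iter):
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                aE, aW = d_e[j, i], d_e[j, i - 1]
                aN, aS = d_n[j, i], d_n[j - 1, i]
                aP = aE + aW + aN + aS
                p_corr[j, i] = (aE * p_corr[j, i + 1] + aW * p_corr[j, i - 1]
                                + aN * p_corr[j + 1, i] + aS * p_corr[j - 1, i]
                                + mass_imbalance[j, i]) / aP
    return p_corr
```

A cell with excess mass inflow gets a positive pressure correction, which pushes the corrected velocities outwards; a cell with zero continuity error needs no correction.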

Of course the answer is no. Because of approximations made in the relationship between velocity and pressure corrections (eg between v_{n}' and p_{N}' and p_{P}') – which, remember, comes from an approximated form of the momentum equation for v_{n} – the velocities change again to non-continuity-satisfying values when the solution of the full momentum equations is repeated at the next iteration. These then need to be corrected again (via pressure corrections) – and so on. Hence the need for iteration. This is fine, so long as the iterative solution proceeds systematically to convergence.

I found, however, that in my case the approximations to the momentum equations embodied in the velocity-correction/pressure-correction relationship were the source of the instability I was observing.

The problem comes in approximating the effects of velocity corrections at neighbouring locations on the velocity in question (say v_{n}). The momentum equation tells us that v_{n} is affected via convection and diffusion (ie viscosity – and also, when using a turbulent viscosity model, by turbulent transport) by any changes in its immediate neighbours (v_{s}, v_{nn}, v_{ne}, v_{nw}). If, for example, v_{nn} increases, it will tend to induce an increase in v_{n}, even in the absence of any changes in pressures – and so on. The way I thought of it is that there is a sort of "elastic" connecting these velocities, so that a change in one will tend to cause a similar change in all the others.

The trouble is that these links cannot be included in full in the pressure correction algorithm. They must be approximated in some way. In the standard SIMPLE, as devised by Suhas Patankar, the practice is to neglect these effects – that is, to presume that neighbouring velocity corrections are zero. This is what I found was leading to the instabilities that I observed.
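In equation form, the discretised momentum equation gives a_{n} v_{n}' = Σ a_{nb} v_{nb}' + A_{n} (p_{P}' − p_{N}'), and standard SIMPLE drops the neighbour sum. A sketch in my own notation (not Patankar's code, and the names are assumptions):

```python
def velocity_correction_simple(a_n, A_n, dp_prime):
    """Standard SIMPLE: neglect the neighbour velocity corrections,
    i.e. assume sum(a_nb * v_nb') = 0, which leaves
        a_n * v_n' = A_n * (p_P' - p_N')
    so the velocity correction per unit pressure-correction
    difference dp_prime = p_P' - p_N' is simply A_n / a_n."""
    return (A_n / a_n) * dp_prime
```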

In any systematically-changing flow field (in my case, for example, the flow develops steadily as the solution proceeds downstream) the velocity changes occurring at any stage in the solution will tend to be in the same direction and of roughly similar magnitude. This means that the "elastic" will tend to cause velocity corrections in the required direction even without pressure corrections. Neglecting this effect means that the pressure-correction equation will "think" that it needs to make bigger pressure changes than is really the case. This leads to an over-prediction of the pressure correction (sometimes a severe one), and hence to the instability and oscillation that I observed.

The solution that I devised was to replace the assumption that "neighbouring velocity corrections are zero" by the assumption that "neighbouring velocity corrections will, on average, be the same as the central velocity correction" (ie the average of the changes in v_{nn}, v_{s}, v_{ne} and v_{nw} will be the same as v_{n}'). Of course this is still an assumption, and is "wrong" – but, for many flow situations, it proves to be a better assumption than that in the standard SIMPLE algorithm.
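In the same sketch notation (mine, not the thesis code), assuming v_{nb}' ≈ v_{n}' moves the neighbour sum to the left-hand side, so the divisor becomes a_{n} − Σ a_{nb} rather than a_{n}:

```python
def velocity_correction_simplec(a_n, sum_a_nb, A_n, dp_prime):
    """Modified SIMPLE (later SIMPLEC): assume each neighbour correction
    equals the central one, so the momentum-correction equation becomes
        (a_n - sum_a_nb) * v_n' = A_n * (p_P' - p_N').
    Since a_n is close to the sum of its neighbour coefficients, the
    divisor is much smaller than a_n: a given pressure correction now
    produces a larger velocity correction, so the pressure-correction
    equation no longer over-predicts the pressure changes needed."""
    return (A_n / (a_n - sum_a_nb)) * dp_prime
```

For the same coefficients, this always gives a larger velocity correction per unit pressure correction than the standard SIMPLE form.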

I tried this for my square duct case, and found that I got unconditional stability for any value of step length, with no need for under-relaxation. So – success – it worked! I therefore used this treatment for all my PhD computations – it was described as "Modified SIMPLE" in my PhD thesis. It became known in the research group at Imperial College and later at CHAM, and was used fairly widely by both groups. At CHAM I recall that it was referred to as "the DGT modification". Then, by the time PHOENICS was developed at CHAM around 1980, it had been replaced by Spalding's SIMPLEST algorithm, which addresses this problem and others in a rather different way.

But that is not quite the end of the story. Exactly the same modification to SIMPLE was later quite independently devised by Van Doormaal and Raithby, and was published in 1984 as the "SIMPLEC algorithm". It has since become widely known and used under this name.

**Postscript**
