[Fig. 2.2: Schematic of the staggered radial mesh (figure file diag_th_model/rad_shells.eps).]
In this subsection, we explain briefly how the gaseous model is solved numerically. We concentrate on aspects of the method not covered in previous work; this description is therefore complementary to Sec. 2.2 of GS94. The algorithm used is a partially implicit Newton-Raphson-Henyey iterative scheme (Henyey et al. 1959; see also Kippenhahn & Weigert 1994, Sec. 11.2).
Putting aside the boundary conditions, the set of equations to be solved is Eqs. 2.22 to 2.25. In practice, however, the equations are rewritten using the logarithm of all positive quantities as dependent variables. As explained in GS94, this greatly improves energy conservation. Formally, one may write this system as
\[
  \frac{\partial x_i}{\partial t} + f_i\!\left(x_j, \frac{\partial x_j}{\partial r}\right) = 0,
  \qquad i,j = 1,\ldots,N_{\mathrm{var}},
\]
where the $x_i$ are the local quantities defining the state of the cluster, i.e.
\[
  x \equiv \left(x_1, x_2, \ldots, x_{N_{\mathrm{var}}}\right),
\]
with $N_{\mathrm{var}}$ the number of dependent variables in the present application.
To be solved numerically, this set of coupled partial differential equations has to be discretised in time and radius. Let us first consider time stepping. Let $\delta t$ be the time step. Assume we know the solution $x^{(t)}$ at time $t$ and want to compute the solution $x^{(t+\delta t)}$ at time $t+\delta t$. For the sake of numerical stability, a partially implicit scheme is used. We adopt the shorthand notations $x \equiv x^{(t+\delta t)}$ and $y \equiv x^{(t)}$. The time derivative is replaced by a finite difference,
\[
  \frac{\partial x_i}{\partial t} \longrightarrow \delta t^{-1}\left(x_i - y_i\right).
\]
In the terms $f_i$, we replace the $x_j$ by $\tilde{x}_j$, which are values intermediate between $y_j$ and $x_j$,
\[
  \tilde{x}_j = \zeta x_j + (1-\zeta)\,y_j,
\]
with the value of the constant $\zeta$ chosen for stability purposes (GS94).
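To make the time discretisation concrete, the following short Python sketch forms the resulting residual for a generic equation of the set; the function name, the placeholder term f and the quoted value of zeta are illustrative assumptions, not the actual implementation.
\begin{verbatim}
def time_residual(x_new, x_old, dt, f, zeta=0.55):
    """Discretised residual of dx_i/dt + f_i = 0 over one time step.

    x_new, x_old : dependent variables at t + dt (sought) and at t (known)
    f            : placeholder callable returning the spatial terms f_i
    zeta         : implicitness parameter; 0.55 is only an illustrative
                   choice of a value between 1/2 and 1
    """
    x_tilde = zeta * x_new + (1.0 - zeta) * x_old   # partially implicit values
    return (x_new - x_old) / dt + f(x_tilde)        # vanishes at the solution
\end{verbatim}
A fully implicit scheme corresponds to $\zeta=1$, while $\zeta=1/2$ gives a time-centred scheme; taking $\zeta$ somewhat larger than $1/2$ damps oscillations at the cost of some accuracy.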
Spatial discretisation is done by defining all quantities (at a given time) on a radial mesh $\{r_1, r_2, \ldots, r_{N_r}\}$ spanning the cluster from the centre to its outer radius. A staggered mesh is implemented: while some quantities are defined at the boundaries of the mesh cells, the others are defined at the centre of each cell, see Fig. 2.2. When the value of a ``boundary'' quantity is needed at the centre of a cell, or vice versa, one resorts to simple averaging of the two neighbouring values, i.e. $\bar{b} = \tfrac{1}{2}(b_i + b_{i+1})$ and $\bar{c} = \tfrac{1}{2}(c_i + c_{i+1})$, if $b$ and $c$ are border- and centre-defined quantities and $\bar{b}$, $\bar{c}$ their centre- and border-interpolations, respectively. For all runs presented here, the mesh points are spaced logarithmically in radius. Let us adopt the notation $x_{\langle i\rangle}$ for the value of a quantity $x$ at the $i$-th position where it is defined (border or centre of a cell). Then, radial derivatives in the terms $f_i$ are approximated by finite differences,
\[
  \frac{\partial x}{\partial r} \longrightarrow \frac{x_{\langle i+1\rangle} - x_{\langle i\rangle}}{\Delta r}
\]
if the derivative has to be evaluated half-way between the points where $x$ is defined (at the centre or border of a cell), or, by differencing the interpolated values,
\[
  \frac{\partial x}{\partial r} \longrightarrow \frac{x_{\langle i+1\rangle} - x_{\langle i-1\rangle}}{2\,\Delta r}
\]
otherwise, where $\Delta r$ is the local mesh spacing. As an exception, upstream differencing is used in the advection term of the second equation in set 2.22, i.e. the difference quotient is displaced by half a mesh point upstream to improve stability.
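For illustration, the staggered-mesh operations just described could be written as follows with NumPy; the storage convention (one 1-D array per quantity, defined either on cell borders or on cell centres) and the function names are assumptions of this sketch.
\begin{verbatim}
import numpy as np

def to_centres(b):
    """Interpolate a border-defined quantity b onto the cell centres."""
    return 0.5 * (b[:-1] + b[1:])

def to_borders(c):
    """Interpolate a centre-defined quantity c onto the interior borders."""
    return 0.5 * (c[:-1] + c[1:])

def ddr_staggered(x, r):
    """dx/dr evaluated half-way between the points where x is defined."""
    return np.diff(x) / np.diff(r)

def ddr_collocated(x, r):
    """dx/dr at the interior points where x itself is defined,
    i.e. a difference over two cells."""
    return (x[2:] - x[:-2]) / (r[2:] - r[:-2])
\end{verbatim}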
By making the substitutions for $\partial x_i/\partial t$ and $\partial x_j/\partial r$ in the set of differential equations 2.72, one obtains, at each mesh point $i$, a set of $N_{\mathrm{var}}$ non-linear algebraic equations linking the new values to be determined, $x_{\langle i-1\rangle}$, $x_{\langle i\rangle}$ and $x_{\langle i+1\rangle}$, to the ``old'' ones, $y_{\langle i-1\rangle}$, $y_{\langle i\rangle}$ and $y_{\langle i+1\rangle}$, which are known,
\[
  \mathcal{F}_{\langle i\rangle}\!\left(x_{\langle i-1\rangle}, x_{\langle i\rangle}, x_{\langle i+1\rangle}\right) = 0 .
\]
Note that the structure of the equations is the same at all mesh points, except $i=1$ and $i=N_r$. In particular, terms in $x_{\langle i-1\rangle}$ do not appear in $\mathcal{F}_{\langle 1\rangle}$. Also, one has to keep in mind that only the $x_{\langle i\rangle}$ are unknown; the $y_{\langle i\rangle}$ play the role of fixed parameters in these equations (as do the $r_i$). If one defines an $(N_{\mathrm{var}}N_r)$-dimensional vector $X^*$ whose component $N_{\mathrm{var}}(i-1)+k$ is $x_{k\langle i\rangle}$, one can write the system of $N_{\mathrm{var}}N_r$ equations as $F^*(X^*)=0$, i.e.
\[
  F^*(X^*) \equiv
  \begin{pmatrix}
    \mathcal{F}_{\langle 1\rangle}\\ \mathcal{F}_{\langle 2\rangle}\\ \vdots\\ \mathcal{F}_{\langle N_r\rangle}
  \end{pmatrix}
  =
  \begin{pmatrix} 0\\ 0\\ \vdots\\ 0 \end{pmatrix}
\]
with $X^* \equiv \left(x_{\langle 1\rangle}, x_{\langle 2\rangle}, \ldots, x_{\langle N_r\rangle}\right)$.
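As a purely illustrative piece of bookkeeping, the position of the component $x_{k\langle i\rangle}$ in the global vector $X^*$ can be computed as follows (0-based array storage assumed, the indices $i$ and $k$ being 1-based as in the text).
\begin{verbatim}
def flat_index(i, k, n_var):
    """0-based position of x_{k<i>} in the global vector X*."""
    return n_var * (i - 1) + (k - 1)
\end{verbatim}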
The system is solved iteratively using the Newton-Raphson scheme. If $X^*_{[m]}$ is the approximation to the solution of Eq. 2.78 after iteration $m$, with $F^*(X^*_{[m]})\neq 0$ in general, the solution is refined through the relation
\[
  X^*_{[m+1]} = X^*_{[m]} -
  \left(\frac{\partial F^*}{\partial X^*}\right)^{-1}_{[m]} F^*\!\left(X^*_{[m]}\right),
\]
where $\left(\partial F^*/\partial X^*\right)^{-1}$ is the inverse of the matrix of derivatives. The latter, of dimension $(N_{\mathrm{var}}N_r)\times(N_{\mathrm{var}}N_r)$, has the following block-tridiagonal structure,
\[
  \frac{\partial F^*}{\partial X^*} =
  \begin{pmatrix}
    \mathbf{D}_{\langle 1\rangle} & \mathbf{U}_{\langle 1\rangle} & & & \\
    \mathbf{L}_{\langle 2\rangle} & \mathbf{D}_{\langle 2\rangle} & \mathbf{U}_{\langle 2\rangle} & & \\
    & \ddots & \ddots & \ddots & \\
    & & \mathbf{L}_{\langle N_r-1\rangle} & \mathbf{D}_{\langle N_r-1\rangle} & \mathbf{U}_{\langle N_r-1\rangle}\\
    & & & \mathbf{L}_{\langle N_r\rangle} & \mathbf{D}_{\langle N_r\rangle}
  \end{pmatrix}.
\]
In this diagram, each square is an $N_{\mathrm{var}}\times N_{\mathrm{var}}$ sub-matrix. For $1<i<N_r$, lines $N_{\mathrm{var}}(i-1)+1$ to $N_{\mathrm{var}}\,i$ of $\partial F^*/\partial X^*$ are composed of a group of three such matrices,
\[
  \mathbf{L}_{\langle i\rangle} \equiv \frac{\partial\mathcal{F}_{\langle i\rangle}}{\partial x_{\langle i-1\rangle}},\qquad
  \mathbf{D}_{\langle i\rangle} \equiv \frac{\partial\mathcal{F}_{\langle i\rangle}}{\partial x_{\langle i\rangle}},\qquad
  \mathbf{U}_{\langle i\rangle} \equiv \frac{\partial\mathcal{F}_{\langle i\rangle}}{\partial x_{\langle i+1\rangle}},
\]
that span columns $N_{\mathrm{var}}(i-2)+1$ to $N_{\mathrm{var}}(i+1)$, while the rest is composed of zeros.
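The overall iteration can be summarised by the following Python sketch; residual_and_blocks, henyey_solve and converged are hypothetical routines standing, respectively, for the evaluation of the discretised equations and their Jacobian blocks, for the block-tridiagonal solver described next, and for the convergence test given at the end of this subsection.
\begin{verbatim}
def newton_time_step(X_old, dt, residual_and_blocks, henyey_solve, converged,
                     max_iter=20):
    """Advance the model by one time step dt using Newton-Raphson iterations.

    X_old               : converged solution at the previous time, (N_r, N_var)
    residual_and_blocks : returns the residuals F (N_r, N_var) and the Jacobian
                          blocks L, D, U (each (N_r, N_var, N_var)), with
                          L[i] ~ dF_<i>/dx_<i-1>, D[i] ~ dF_<i>/dx_<i>,
                          U[i] ~ dF_<i>/dx_<i+1>
    henyey_solve        : block-tridiagonal solver (see the sketch below)
    converged           : convergence test on the last correction
    """
    X = X_old.copy()                    # initial guess: previous time step
    for m in range(max_iter):
        F, L, D, U = residual_and_blocks(X, X_old, dt)
        W = henyey_solve(L, D, U, -F)   # solve (dF*/dX*) W* = -F*
        X = X + W                       # refined approximation
        if converged(W, X):
            break
    return X
\end{verbatim}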
The Henyey method is a way to take advantage of the special structure of the matrix $\partial F^*/\partial X^*$ to solve system (2.79) efficiently, with a number of operations scaling like $N_r$ rather than $N_r^3$, as would be the case if one used a general-purpose matrix inversion scheme. Setting $B^* \equiv -F^*\!\left(X^*_{[m]}\right)$ and $W^* \equiv X^*_{[m+1]} - X^*_{[m]}$, Eq. (2.79) is equivalent to
\[
  \left(\frac{\partial F^*}{\partial X^*}\right) W^* = B^*
\]
with $W^*$ the unknown vector. We further decompose the vectors $W^*$ and $B^*$ into $N_r$ sub-vectors of dimension $N_{\mathrm{var}}$, each one representing the values at a given mesh point,
\[
  W^* = \left( W_{\langle 1\rangle}, W_{\langle 2\rangle}, \ldots, W_{\langle N_r\rangle} \right), \qquad
  B^* = \left( B_{\langle 1\rangle}, B_{\langle 2\rangle}, \ldots, B_{\langle N_r\rangle} \right).
\]
Then, the system (2.83) can be written as a set of $N_r$ coupled $N_{\mathrm{var}}$-dimensional vector equations,
\[
\begin{aligned}
  \mathbf{D}_{\langle 1\rangle} W_{\langle 1\rangle} + \mathbf{U}_{\langle 1\rangle} W_{\langle 2\rangle} &= B_{\langle 1\rangle},\\
  \mathbf{L}_{\langle i\rangle} W_{\langle i-1\rangle} + \mathbf{D}_{\langle i\rangle} W_{\langle i\rangle} + \mathbf{U}_{\langle i\rangle} W_{\langle i+1\rangle} &= B_{\langle i\rangle}, \qquad 1<i<N_r,\\
  \mathbf{L}_{\langle N_r\rangle} W_{\langle N_r-1\rangle} + \mathbf{D}_{\langle N_r\rangle} W_{\langle N_r\rangle} &= B_{\langle N_r\rangle}.
\end{aligned}
\]
The algorithm operates in two passes. First, going from $i=1$ to $i=N_r$, one defines recursively a sequence of $N_{\mathrm{var}}$-vectors $V_{\langle i\rangle}$ and $N_{\mathrm{var}}\times N_{\mathrm{var}}$ matrices $\mathbf{M}_{\langle i\rangle}$ through
\[
\begin{aligned}
  \mathbf{M}_{\langle 1\rangle} &= \mathbf{D}_{\langle 1\rangle}^{-1}\,\mathbf{U}_{\langle 1\rangle}, \qquad
  V_{\langle 1\rangle} = \mathbf{D}_{\langle 1\rangle}^{-1}\, B_{\langle 1\rangle},\\
  \mathbf{M}_{\langle i\rangle} &= \left(\mathbf{D}_{\langle i\rangle}-\mathbf{L}_{\langle i\rangle}\mathbf{M}_{\langle i-1\rangle}\right)^{-1}\mathbf{U}_{\langle i\rangle}, \qquad 1<i<N_r,\\
  V_{\langle i\rangle} &= \left(\mathbf{D}_{\langle i\rangle}-\mathbf{L}_{\langle i\rangle}\mathbf{M}_{\langle i-1\rangle}\right)^{-1}\left(B_{\langle i\rangle}-\mathbf{L}_{\langle i\rangle}V_{\langle i-1\rangle}\right), \qquad 1<i\le N_r;
\end{aligned}
\]
$\mathbf{M}_{\langle N_r\rangle}$ is not defined. In the second pass, the values of the unknown $W_{\langle i\rangle}$ are computed, climbing back from $i=N_r$ to $i=1$, with
\[
  W_{\langle N_r\rangle} = V_{\langle N_r\rangle}, \qquad
  W_{\langle i\rangle} = V_{\langle i\rangle} - \mathbf{M}_{\langle i\rangle} W_{\langle i+1\rangle}, \quad i<N_r .
\]
Note that, with this algorithm, only $N_r$ matrices of size $N_{\mathrm{var}}\times N_{\mathrm{var}}$ have to be inverted. We use Gauss elimination for this purpose because this venerable technique proves robust enough to deal properly with the kind of badly conditioned matrices that often appear in this application.
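Under the block notation introduced above ($\mathbf{L}$, $\mathbf{D}$, $\mathbf{U}$ for the sub-diagonal, diagonal and super-diagonal Jacobian blocks), the two-pass Henyey elimination can be sketched in NumPy as follows; np.linalg.solve stands in for the Gauss elimination of the small $N_{\mathrm{var}}\times N_{\mathrm{var}}$ systems, and the sketch illustrates the recursion rather than reproducing the actual code.
\begin{verbatim}
import numpy as np

def henyey_solve(L, D, U, B):
    """Solve L[i] W[i-1] + D[i] W[i] + U[i] W[i+1] = B[i], i = 0..N_r-1,
    by forward elimination and back-substitution (L[0], U[-1] are ignored).

    L, D, U : Jacobian blocks, shape (N_r, N_var, N_var)
    B       : right-hand side, shape (N_r, N_var)
    """
    n_r = D.shape[0]
    M = np.empty_like(U)     # M[i] such that W[i] = V[i] - M[i] W[i+1]
    V = np.empty_like(B)     # (M[-1] is never needed)

    # first pass: forward elimination
    M[0] = np.linalg.solve(D[0], U[0])
    V[0] = np.linalg.solve(D[0], B[0])
    for i in range(1, n_r):
        G = D[i] - L[i] @ M[i - 1]            # reduced diagonal block
        if i < n_r - 1:
            M[i] = np.linalg.solve(G, U[i])
        V[i] = np.linalg.solve(G, B[i] - L[i] @ V[i - 1])

    # second pass: back-substitution
    W = np.empty_like(B)
    W[-1] = V[-1]
    for i in range(n_r - 2, -1, -1):
        W[i] = V[i] - M[i] @ W[i + 1]
    return W
\end{verbatim}
Only the $N_r$ reduced diagonal blocks are factorised, which gives the linear scaling with $N_r$ mentioned above.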
The initial model for the Newton-Raphson algorithm is given by the structure of the cluster at the previous time step, $X^*_{[0]} = Y^*$. One iterates until the following convergence criteria are met. Let us set $\delta x_{k\langle i\rangle} \equiv x^{[m+1]}_{k\langle i\rangle} - x^{[m]}_{k\langle i\rangle}$. Then, the condition for logarithmic quantities is
\[
  \frac{1}{N_r}\sum_{i=1}^{N_r} \frac{1}{N_{\log}} \sum_{k\,\in\,\mathrm{logs}}
  \left(\delta x_{k\langle i\rangle}\right)^2 < \epsilon_1,
\]
where the inner sum runs over the $N_{\log}$ logarithmic variables. For the velocity variables, one checks
\[
  \frac{1}{N_r}\sum_{i=1}^{N_r} \frac{1}{N_{\mathrm{vel}}} \sum_{k\,\in\,\mathrm{vel}}
  \left(\frac{\delta x_{k\langle i\rangle}}{|x_{k\langle i\rangle}| + \epsilon_1\, w_{\langle i\rangle}}\right)^2 < \epsilon_2,
\]
where $w_{\langle i\rangle}$ is a typical velocity scale at mesh point $i$ and $\epsilon_1$, $\epsilon_2$ are prescribed tolerances. Generally, two iterations are sufficient to reach convergence.
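A possible concrete form of these convergence tests, compatible with the driver sketched earlier, is given below; the tolerance values, the separation of the columns into logarithmic and velocity variables, and the velocity scale w are assumptions made for the illustration.
\begin{verbatim}
import numpy as np

def convergence_test(W, X, log_idx, vel_idx, w, eps1=1e-6, eps2=1e-3):
    """Convergence criteria after a Newton-Raphson iteration.

    W        : last correction to the solution, shape (N_r, N_var)
    X        : current solution, shape (N_r, N_var)
    log_idx  : column indices of the logarithmic variables
    vel_idx  : column indices of the velocity variables
    w        : typical velocity scale at each mesh point, shape (N_r,)
    eps1, eps2 : tolerances (values here are purely illustrative)
    """
    crit_log = np.mean(W[:, log_idx] ** 2)                           # absolute
    rel = W[:, vel_idx] / (np.abs(X[:, vel_idx]) + eps1 * w[:, None])
    crit_vel = np.mean(rel ** 2)                                     # relative
    return (crit_log < eps1) and (crit_vel < eps2)

# wrapper matching the driver sketch:
# converged = lambda W, X: convergence_test(W, X, log_idx, vel_idx, w)
\end{verbatim}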