Reference: Finite differencing#

class pystella.FiniteDifferencer(decomp, halo_shape, dx, stream=False, rank_shape=None, device=None, first_stencil=None, second_stencil=None, gradlap_lsize=None, grad_lsize=None, lap_lsize=None)[source]#

A convenience class for generating kernels which compute spatial gradients, Laplacians, and combinations thereof.

See SpectralCollocator for a version of this class implementing spectral collocation.

The following arguments are required:

Parameters:
  • decomp – An instance of DomainDecomposition.

  • halo_shape – The number of halo layers on (both sides of) each axis of the computational grid. Currently must be an int.

  • dx – A 3-tuple specifying the grid spacing of each axis.

The following keyword-only arguments are recognized:

Parameters:
  • first_stencil – A Callable with signature (f, direction) where f is a Field and direction indicates the spatial axis (1, 2, or 3) along which the stencil is taken, returning the (symbolic) first-order stencil. Defaults to the centered difference of the highest order allowed by the amount of array padding (set by halo_shape). See expand_stencil().

  • second_stencil – Like first_stencil, but for second-order differences.

  • rank_shape – A 3-tuple specifying the shape of looped-over arrays. Defaults to None, in which case these values are not fixed (and will be inferred when the kernel is called at a slight performance penalty).
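As a rough illustration of the default behavior described for first_stencil (the highest-order centered difference permitted by the halo padding), the sketch below applies the textbook second- and fourth-order centered first-difference coefficients in plain Python. The coefficient table and helper are hypothetical illustrations, not pystella's internal implementation, which generates stencils symbolically.

```python
# Sketch: textbook centered first-difference coefficients, keyed by the
# amount of halo padding that admits them. Only the positive offsets are
# listed; the stencil is antisymmetric for a first derivative.
FIRST_DIFF_COEFS = {
    1: {1: 1 / 2},              # 2nd-order accurate (halo_shape=1)
    2: {1: 2 / 3, 2: -1 / 12},  # 4th-order accurate (halo_shape=2)
}


def first_derivative(f, i, dx, halo_shape):
    """Apply the centered first-difference stencil at index i of a 1D array."""
    total = 0.0
    for offset, c in FIRST_DIFF_COEFS[halo_shape].items():
        # Odd-order derivative: the stencil is antisymmetric in the offset.
        total += c * (f[i + offset] - f[i - offset])
    return total / dx

# f(x) = x**3 on a unit grid; the 4th-order stencil is exact for cubics,
# so the result at x=3 is 3 * 3**2 = 27 (up to floating-point rounding).
f = [x**3 for x in range(8)]
print(first_derivative(f, 3, 1.0, halo_shape=2))
```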

__call__(queue, fx, *, lap=None, pdx=None, pdy=None, pdz=None, grd=None, allocator=None)[source]#

Computes requested derivatives of the input fx.

Parameters:
  • queue – A pyopencl.CommandQueue.

  • fx

    The array to compute derivatives of. Halos are shared using DomainDecomposition.share_halos(), and a kernel is selected based on which combination of the remaining input arguments is not None.

    Valid combinations are

    • all of lap, pdx, pdy, and pdz (or equivalently lap and grd)

    • any single one of lap, pdx, pdy, or pdz

    • only pdx, pdy, and pdz (or equivalently only grd)

    If fx has shape (...,) + (rank_shape+2*halo_shape), all the outermost indices (i.e., in place of ...) are looped over. As an example, with halo_shape=1:

    >>> fx.shape, lap.shape
    ((2, 3, 130, 130, 130), (2, 3, 128, 128, 128))
    >>> derivs(queue, fx=fx, lap=lap)
    

    would loop over the outermost two axes with shape (2, 3). Note that the shapes of fx and lap (or, in general, of all passed arrays) must match on these outer axes.

  • lap – The array which will store the Laplacian of fx. Defaults to None.

  • pdx – The array which will store the \(x\)-derivative of fx. Defaults to None.

  • pdy – The array which will store the \(y\)-derivative of fx. Defaults to None.

  • pdz – The array which will store the \(z\)-derivative of fx. Defaults to None.

  • grd

    The array which will store the gradient of fx, i.e., all three of pdx, pdy, and pdz. If supplied, any input values to pdx, pdy, or pdz are ignored and replaced via

    pdx = grd[..., 0, :, :, :]
    pdy = grd[..., 1, :, :, :]
    pdz = grd[..., 2, :, :, :]
    

    Defaults to None.

Returns:

The pyopencl.Event associated with the kernel invocation (i.e., of the last called kernel if multiple axes are being looped over).
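The valid argument combinations listed above can be sketched as a small validation helper. This is a hypothetical illustration of the documented rules only; pystella performs its own kernel dispatch internally.

```python
def valid_combination(lap=None, pdx=None, pdy=None, pdz=None, grd=None):
    """Sketch of which output-array combinations the documented rules
    for FiniteDifferencer.__call__ accept (hypothetical helper)."""
    if grd is not None:
        # grd stands in for all three of pdx, pdy, and pdz
        pdx = pdy = pdz = grd
    have = [arr is not None for arr in (lap, pdx, pdy, pdz)]
    return (
        all(have)                           # lap plus the full gradient
        or sum(have) == 1                   # any single one of the four
        or (lap is None and all(have[1:]))  # only the gradient
    )

print(valid_combination(lap=[], grd=[]))  # lap and grd: valid
print(valid_combination(pdx=[]))          # a single derivative: valid
print(valid_combination(lap=[], pdx=[]))  # lap with one derivative: invalid
```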

divergence(queue, vec, div, allocator=None)[source]#

Computes the divergence of the input vec.

Parameters:
  • queue – A pyopencl.CommandQueue.

  • vec

    The array to compute the divergence of. Halos are shared using DomainDecomposition.share_halos().

    If vec has shape (...,) + (3,) + (rank_shape+2*halo_shape), all the outermost indices (i.e., in place of ...) are looped over. As an example, with halo_shape=1:

    >>> vec.shape, div.shape
    ((2, 3, 130, 130, 130), (2, 128, 128, 128))
    >>> derivs.divergence(queue, vec, div)
    

    would loop over the outermost axis with shape (2,). Note that the shapes of vec and div must match on these outer axes.

  • div – The array which will store the divergence of vec.

Returns:

The pyopencl.Event associated with the kernel invocation (i.e., of the last called kernel if multiple axes are being looped over).
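Conceptually, the divergence sums the centered first difference of each vector component along its own axis. The pure-Python sketch below (not pystella's OpenCL kernel) evaluates this at a single interior grid point, using nested lists in place of device arrays.

```python
def divergence_point(vec, i, j, k, dx):
    """Centered-difference divergence of vec at interior point (i, j, k).
    vec is indexed [component][i][j][k]; dx is a 3-tuple of spacings."""
    ddx = (vec[0][i + 1][j][k] - vec[0][i - 1][j][k]) / (2 * dx[0])
    ddy = (vec[1][i][j + 1][k] - vec[1][i][j - 1][k]) / (2 * dx[1])
    ddz = (vec[2][i][j][k + 1] - vec[2][i][j][k - 1]) / (2 * dx[2])
    return ddx + ddy + ddz

# The field vec = (x, y, z) has divergence 3 everywhere.
n = 5
grid = range(n)
vec = [
    [[[x for _ in grid] for _ in grid] for x in grid],  # x-component = x
    [[[y for _ in grid] for y in grid] for _ in grid],  # y-component = y
    [[[z for z in grid] for _ in grid] for _ in grid],  # z-component = z
]
print(divergence_point(vec, 2, 2, 2, (1.0, 1.0, 1.0)))  # -> 3.0
```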

pystella.derivs.expand_stencil(f, coefs)[source]#

Expands a stencil over a field.

Parameters:
  • f – A Field.

  • coefs – A dict whose values are the coefficients of the stencil at an offset given by the key. The keys must be 3-tuples, and the values may be pymbolic expressions or constants.

Example:

>>> f = ps.Field("f", offset="h")
>>> coefs = {(1, 0, 0): 1, (-1, 0, 0): -1}
>>> stencil = ps.derivs.expand_stencil(f, coefs)
>>> print(ps.index_fields(stencil))
f[i + h + 1, j + h, k + h] + (-1)*f[i + h + -1, j + h, k + h]
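Numerically, expanding a coefs dict amounts to summing coefficient-weighted neighbor values. A plain-Python 1D analogue of the stencil above (expand_stencil itself uses 3-tuple offsets and produces a symbolic expression):

```python
def apply_stencil_1d(f, i, coefs):
    """Numerically apply an {offset: coefficient} stencil at index i,
    mirroring what the symbolic expression from expand_stencil computes."""
    return sum(c * f[i + off] for off, c in coefs.items())

coefs = {1: 1, -1: -1}        # f[i+1] - f[i-1], as in the example above
f = [x**2 for x in range(6)]  # f(x) = x**2 on an integer grid
print(apply_stencil_1d(f, 3, coefs))  # -> 12  (f[4] - f[2] = 16 - 4)
```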
pystella.derivs.centered_diff(f, coefs, direction, order)[source]#

A convenience wrapper to expand_stencil() for computing centered differences. By assuming the symmetry of the stencil (which has parity given by the parity of order), no redundant coefficients need be supplied. Further, by supplying the direction parameter, the input offsets (the keys of coefs) need only be integers.

Parameters:
  • f – A Field.

  • coefs – A dict whose values are the coefficients of the stencil at an offset given by the key. The keys must be integers, and the values may be pymbolic expressions or constants. Only non-redundant (offset, coefficient) pairs are needed.

  • direction – An integer in (1, 2, 3) denoting the direction over which to expand the stencil (i.e., to apply the offset).

  • order – The order of the derivative being computed, which determines whether coefficients at the opposite offset have the same or opposite sign.

Example:

>>> f = ps.Field("f", offset="h")
>>> coefs = {1: 1}
>>> stencil = ps.derivs.centered_diff(f, coefs, 1, 1)
>>> print(ps.index_fields(stencil))
f[i + h + 1, j + h, k + h] + (-1)*f[i + h + -1, j + h, k + h]
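The symmetry exploited here can be sketched as follows: for an order-o derivative, the coefficient at -offset equals (-1)**o times the coefficient at +offset. The helper below is a hypothetical illustration of that rule, not pystella's API.

```python
def expand_centered(coefs, order):
    """Expand one-sided {offset: coefficient} pairs into the full
    centered stencil, using the parity of the derivative order."""
    sign = (-1) ** order
    full = {}
    for off, c in coefs.items():
        full[off] = c
        full[-off] = sign * c  # antisymmetric for odd order, symmetric for even
    return full

print(expand_centered({1: 1}, order=1))         # {1: 1, -1: -1}
print(expand_centered({1: 1, 0: -2}, order=2))  # {1: 1, -1: 1, 0: -2}
```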