torchphysics.utils.differentialoperators namespace

Submodules

torchphysics.utils.differentialoperators.differenceoperators module

This file contains difference operators that approximate derivatives of discrete functions. They are intended for computing derivatives in operator-learning approaches such as FNO.

torchphysics.utils.differentialoperators.differenceoperators.discrete_grad_on_grid(model_out, grid_size)[source]

Approximates the gradient of a discrete function using finite differences.

Parameters:
  • model_out (torch.Tensor) – The discrete function to approximate the gradient for.

  • grid_size (float) – The step size used for the finite difference approximation and underlying grid.

Notes

This method assumes that the input function, whose gradient should be computed, is defined on a regular, equidistant grid. The function is assumed to have the shape (batch_size, N_1, N_2, …, N_d, dim), where dim is the output dimension of the function and N_i is the resolution in the i-th space direction. The gradient will be computed in all d directions. A central difference scheme is used in the interior; at the boundary, a one-sided difference scheme (also of order 2) is used.
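For the one-dimensional case, the scheme described above can be sketched in a few lines of plain NumPy; `central_grad_1d` is a hypothetical helper written for illustration, not part of torchphysics.

```python
import numpy as np

def central_grad_1d(f, h):
    """Approximate df/dx on an equidistant grid with step h: central
    differences in the interior, second-order one-sided stencils at
    the two boundary points."""
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - f[:-2]) / (2 * h)               # central, O(h^2)
    g[0] = (-3 * f[0] + 4 * f[1] - f[2]) / (2 * h)     # one-sided, O(h^2)
    g[-1] = (3 * f[-1] - 4 * f[-2] + f[-3]) / (2 * h)  # one-sided, O(h^2)
    return g

h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)
f = x**2                      # both stencils are exact for quadratics
print(np.allclose(central_grad_1d(f, h), 2 * x))  # True
```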

torchphysics.utils.differentialoperators.differenceoperators.discrete_laplacian_on_grid(model_out, grid_size)[source]

Approximates the laplacian of a discrete function using finite differences.

Parameters:
  • model_out (torch.Tensor) – The discrete function to approximate the laplacian for.

  • grid_size (float) – The step size used for the finite difference approximation and underlying grid.

Notes

This method assumes the same properties as discrete_grad_on_grid.
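In the one-dimensional interior, the laplacian reduces to the standard second-difference stencil; the snippet below (plain NumPy, for illustration only, not the torchphysics implementation) shows it recovering the exact second derivative of a quadratic.

```python
import numpy as np

h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)
f = x**2
# interior second-difference stencil, O(h^2)
lap = (f[2:] - 2 * f[1:-1] + f[:-2]) / h**2
print(np.allclose(lap, 2.0))  # d^2/dx^2 of x^2 is 2, exact for quadratics
```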

torchphysics.utils.differentialoperators.differentialoperators module

This file contains the differential operators.

NOTE: We aim to make the computation of differential operators more efficient by building an intelligent framework that keeps already computed derivatives and reuses them.

torchphysics.utils.differentialoperators.differentialoperators.convective(deriv_out, convective_field, *derivative_variable)[source]

Computes the convective term \((v \cdot \nabla)u\) that appears e.g. in material derivatives. Note: This is not the whole material derivative.

Parameters:
  • deriv_out (torch.tensor) – The vector or scalar field \(u\) that is convected and should be differentiated.

  • convective_field (torch.tensor) – The flow vector field \(v\). Should have the same dimension as derivative_variable.

  • derivative_variable (torch.tensor) – The spatial variable with respect to which deriv_out should be differentiated.

Returns:

A vector or scalar (+batch-dimension) Tensor, that contains the convective derivative.

Return type:

torch.tensor
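For a scalar field, the term \((v \cdot \nabla)u\) can be assembled from an autograd gradient. The following is a plain-PyTorch sketch for illustration, independent of the torchphysics implementation:

```python
import torch

x = torch.tensor([[0.5, 1.0], [2.0, 3.0]], requires_grad=True)  # (batch, 2)
u = (x[:, 0] * x[:, 1]).unsqueeze(-1)        # u(x1, x2) = x1 * x2
v = torch.tensor([[1.0, 0.0], [0.0, 1.0]])   # flow field at each point

grad_u = torch.autograd.grad(u.sum(), x)[0]   # grad u = (x2, x1), (batch, 2)
conv = (v * grad_u).sum(dim=1, keepdim=True)  # (v . grad)u, shape (batch, 1)
print(conv)  # rows give 1.0 and 2.0
```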

torchphysics.utils.differentialoperators.differentialoperators.div(model_out, *derivative_variable)[source]

Computes the divergence of a network with respect to the given variable. Only for vector-valued inputs; for matrices use the function matrix_div.

Parameters:
  • model_out (torch.tensor) – The output tensor of the neural network

  • derivative_variable (torch.tensor) – The input tensor of the variables with respect to which the derivatives have to be computed. These have to be in a consistent ordering: if, for example, the output is u = (u_x, u_y), then the variables have to be passed in the order (x, y)

Returns:

A Tensor, where every row contains the values of the divergence of the model w.r.t the row of the input variable.

Return type:

torch.tensor
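The divergence \(\sum_i \partial u_i / \partial x_i\) can be reproduced with plain PyTorch autograd; this is an illustrative sketch, not the torchphysics implementation:

```python
import torch

x = torch.tensor([[1.0, 2.0]], requires_grad=True)
u = torch.stack([x[:, 0]**2, x[:, 0] * x[:, 1]], dim=1)  # u = (x1^2, x1*x2)
div = torch.zeros(x.shape[0], 1)
for i in range(u.shape[1]):  # div u = sum_i d u_i / d x_i
    div += torch.autograd.grad(u[:, i].sum(), x,
                               retain_graph=True)[0][:, i:i + 1]
print(div)  # d(x1^2)/dx1 + d(x1*x2)/dx2 = 2*x1 + x1 = 3 at x1 = 1
```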

torchphysics.utils.differentialoperators.differentialoperators.grad(model_out, *derivative_variable)[source]

Computes the gradient of a network with respect to the given variable.

Parameters:
  • model_out (torch.tensor) – The (scalar) output tensor of the neural network

  • derivative_variable (torch.tensor) – The input tensor of the variables with respect to which the derivatives have to be computed

Returns:

A Tensor, where every row contains the values of the first derivatives (gradient) w.r.t the row of the input variable.

Return type:

torch.tensor
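A minimal plain-PyTorch sketch of a row-wise gradient (for illustration; the torchphysics function wraps this kind of autograd call):

```python
import torch

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
u = (x**2).sum(dim=1, keepdim=True)      # scalar output per batch row
g = torch.autograd.grad(u.sum(), x)[0]   # row-wise gradient, (batch, 2)
print(g)  # grad of x1^2 + x2^2 is (2*x1, 2*x2)
```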

torchphysics.utils.differentialoperators.differentialoperators.jac(model_out, *derivative_variable)[source]

Computes the jacobian of a network output with respect to the given input.

Parameters:
  • model_out (torch.tensor) – The output tensor whose jacobian should be computed.

  • derivative_variable (torch.tensor) – The input tensor with respect to which the jacobian should be computed.

Returns:

A Tensor of shape (b, m, n), where every entry along the batch dimension contains an (m, n) jacobian.

Return type:

torch.tensor
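The (b, m, n) shape can be illustrated with `torch.func.jacrev` and `torch.vmap` (available in PyTorch 2.0 and later); this is a sketch independent of the torchphysics implementation:

```python
import torch

def f(x):  # per-sample map R^2 -> R^3
    return torch.stack([x[0] * x[1], x[0]**2, x[1]])

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])   # batch of inputs, b = 2
J = torch.vmap(torch.func.jacrev(f))(x)      # (b, m, n) = (2, 3, 2)
print(J.shape)  # torch.Size([2, 3, 2])
```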

torchphysics.utils.differentialoperators.differentialoperators.laplacian(model_out, *derivative_variable, grad=None)[source]

Computes the laplacian of a network with respect to the given variable.

Parameters:
  • model_out (torch.tensor) – The (scalar) output tensor of the neural network

  • derivative_variable (torch.tensor) – The input tensor of the variables with respect to which the derivatives have to be computed

  • grad (torch.tensor) – If the gradient has already been computed somewhere else, it is more efficient to use it again.

Returns:

A Tensor, where every row contains the value of the sum of the second derivatives (laplace) w.r.t the row of the input variable.

Return type:

torch.tensor
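A plain-PyTorch sketch of the laplacian as the sum of second derivatives (illustrative only, not the torchphysics implementation). The first line of autograd work is exactly what the grad argument lets you reuse:

```python
import torch

x = torch.tensor([[1.0, 2.0]], requires_grad=True)
u = (x**2).sum(dim=1, keepdim=True)   # u = x1^2 + x2^2
# First derivatives; create_graph=True so they can be differentiated again.
g = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
lap = torch.zeros(x.shape[0], 1)
for i in range(x.shape[1]):           # sum of second derivatives
    lap += torch.autograd.grad(g[:, i].sum(), x,
                               retain_graph=True)[0][:, i:i + 1]
print(lap)  # laplacian of x1^2 + x2^2 is 2 + 2 = 4
```

If the gradient g has already been computed for another term, passing it in avoids repeating the first backward pass.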

torchphysics.utils.differentialoperators.differentialoperators.matrix_div(model_out, *derivative_variable)[source]

Computes the divergence for matrix/tensor-valued functions.

Parameters:
  • model_out (torch.tensor) – The batch of matrices that should be differentiated.

  • derivative_variable (torch.tensor) – The spatial variable with respect to which model_out should be differentiated.

Returns:

A Tensor of vectors of the form (batch, dim), containing the divergence of the input.

Return type:

torch.tensor
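Row-wise, the matrix divergence is \((\nabla \cdot \sigma)_i = \sum_j \partial \sigma_{ij} / \partial x_j\); a plain-PyTorch sketch (illustrative, not the torchphysics implementation):

```python
import torch

x = torch.tensor([[1.0, 2.0]], requires_grad=True)
# A batch of matrices sigma = [[x1, x2], [0, x1*x2]], shape (batch, 2, 2).
sigma = torch.stack(
    [torch.stack([x[:, 0], x[:, 1]], dim=1),
     torch.stack([0 * x[:, 0], x[:, 0] * x[:, 1]], dim=1)], dim=1)
div = torch.zeros(x.shape[0], 2)
for i in range(2):
    for j in range(2):  # (div sigma)_i = sum_j d sigma_ij / d x_j
        div[:, i] += torch.autograd.grad(sigma[:, i, j].sum(), x,
                                         retain_graph=True)[0][:, j]
print(div)  # row 0: 1 + 1 = 2; row 1: 0 + x1 = 1
```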

torchphysics.utils.differentialoperators.differentialoperators.normal_derivative(model_out, normals, *derivative_variable)[source]

Computes the normal derivative of a network with respect to the given variable and normal vectors.

Parameters:
  • model_out (torch.tensor) – The (scalar) output tensor of the neural network

  • derivative_variable (torch.tensor) – The input tensor of the variables with respect to which the derivatives have to be computed

  • normals (torch.tensor) – The normal vectors at the points where the derivative has to be computed, in the form: normals = tensor([normal_1, normal_2, …])

Returns:

A Tensor, where every row contains the values of the normal derivatives w.r.t the row of the input variable.

Return type:

torch.tensor
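The normal derivative is the projection \(n \cdot \nabla u\); a plain-PyTorch sketch (for illustration, not the torchphysics implementation):

```python
import torch

x = torch.tensor([[1.0, 0.0], [0.0, 2.0]], requires_grad=True)
u = (x**2).sum(dim=1, keepdim=True)               # u = x1^2 + x2^2
normals = torch.tensor([[1.0, 0.0], [0.0, 1.0]])  # unit normal per point
g = torch.autograd.grad(u.sum(), x)[0]            # gradient, (batch, 2)
dn = (normals * g).sum(dim=1, keepdim=True)       # n . grad(u)
print(dn)  # 2*x1 at (1, 0) and 2*x2 at (0, 2): values 2 and 4
```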

torchphysics.utils.differentialoperators.differentialoperators.partial(model_out, *derivative_variables)[source]

Computes the (n-th, possibly mixed) partial derivative of a network output with respect to the given variables.

Parameters:
  • model_out (torch.tensor) – The output tensor of the neural network

  • derivative_variables (torch.tensor(s)) – The input tensors with respect to which the derivatives should be computed. If n tensors are given, the n-th (mixed) derivative will be computed.

Returns:

A Tensor, where every row contains the values of the computed partial derivative of the model w.r.t the row of the input variable.

Return type:

torch.tensor
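A mixed second derivative can be built by chaining autograd calls; this plain-PyTorch sketch (illustrative, not the torchphysics implementation) computes \(\partial^2 u / \partial x \, \partial y\):

```python
import torch

x = torch.tensor([[1.0]], requires_grad=True)
y = torch.tensor([[2.0]], requires_grad=True)
u = x**2 * y                                    # u(x, y) = x^2 * y
du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # 2*x*y
d2u_dxdy = torch.autograd.grad(du_dx.sum(), y)[0]              # 2*x
print(d2u_dxdy)  # 2*x = 2 at x = 1
```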

torchphysics.utils.differentialoperators.differentialoperators.rot(model_out, *derivative_variable)[source]

Computes the rotation/curl of a 3-dimensional vector field (given by a network output) with respect to the given input.

Parameters:
  • model_out (torch.tensor) – The output tensor of shape (b, 3) whose rotation should be computed.

  • derivative_variable (torch.tensor) – The input tensor of shape (b, 3) with respect to which the rotation should be computed.

Returns:

A Tensor of shape (b, 3), where every row contains a rotation/curl vector for a given batch element.

Return type:

torch.tensor
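The curl can be assembled from the Jacobian entries, \((\nabla \times v)_1 = \partial_2 v_3 - \partial_3 v_2\) and cyclic permutations; a plain-PyTorch sketch (for illustration, not the torchphysics implementation):

```python
import torch

x = torch.tensor([[1.0, 2.0, 3.0]], requires_grad=True)
# v = (-x2, x1, 0): rigid rotation about the x3-axis, curl = (0, 0, 2)
v = torch.stack([-x[:, 1], x[:, 0], 0 * x[:, 2]], dim=1)
rows = [torch.autograd.grad(v[:, i].sum(), x, retain_graph=True)[0]
        for i in range(3)]
J = torch.stack(rows, dim=1)        # (b, 3, 3), J[:, i, j] = d v_i / d x_j
curl = torch.stack([J[:, 2, 1] - J[:, 1, 2],
                    J[:, 0, 2] - J[:, 2, 0],
                    J[:, 1, 0] - J[:, 0, 1]], dim=1)
print(curl)  # (0, 0, 2) for the single batch entry
```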

torchphysics.utils.differentialoperators.differentialoperators.sym_grad(model_out, *derivative_variable)[source]

Computes the symmetric gradient: \(0.5(\nabla u + \nabla u^T)\).

Parameters:
  • model_out (torch.tensor) – The vector field \(u\) that should be differentiated.

  • derivative_variable (torch.tensor) – The spatial variable with respect to which model_out should be differentiated.

Returns:

A Tensor of matrices of the form (batch, dim, dim), containing the symmetric gradient.

Return type:

torch.tensor
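The symmetric gradient is the symmetrized Jacobian of the vector field; a sketch using `torch.func.jacrev` and `torch.vmap` (PyTorch 2.0 and later; illustrative, not the torchphysics implementation):

```python
import torch

def u(x):  # per-sample vector field u: R^2 -> R^2
    return torch.stack([x[0] * x[1], x[1]**2])

x = torch.tensor([[1.0, 2.0]])
J = torch.vmap(torch.func.jacrev(u))(x)   # Jacobian, (batch, 2, 2)
sym = 0.5 * (J + J.transpose(1, 2))       # 0.5 * (grad u + grad u^T)
print(sym)  # [[2.0, 0.5], [0.5, 4.0]] for the single batch entry
```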