torchphysics.models package

Contains different PyTorch models which can be trained to approximate the solution of a differential equation.

Additional basic network structures (adaptive weights, normalization layers) are implemented to stabilize and speed up the training process.

If different models should be applied to different parts of the differential equation, this can be achieved with the classes torchphysics.models.Sequential and torchphysics.models.Parallel.

Here you also find the parameters that can be learned in inverse problems.


Submodules

torchphysics.models.FNO module

class torchphysics.models.FNO.FNO(input_space, output_space, fourier_layers: int, hidden_channels: int = 16, fourier_modes=16, activations=Tanh(), skip_connections=False, linear_connections=True, bias=True, channel_up_sample_network=None, channel_down_sample_network=None, xavier_gains=1.6666666666666667, space_resolution=None)[source]

Bases: Model

The Fourier Neural Operator, originally developed in [1].

Parameters:
  • input_space (Space) – The space of the points that can be put into this model.

  • output_space (Space) – The space of the points returned by this model.

  • fourier_layers (int) – The number of Fourier layers of this network. Each Fourier layer consists of a spectral convolution with learnable kernels. See [1] for an overview of the model. Linear transformations and skip connections can be enabled in each layer as well.

  • hidden_channels (int) – The number of hidden channels.

  • fourier_modes (int or list, tuple) – The number of Fourier modes used for the spectral convolution in each layer. Modes above the given value will be truncated; if there are not enough modes, they are padded with 0. For a 1D space domain you can pass a single integer or a list of integers, so that a different number of modes is used in each layer. For an N-dimensional space domain a list (or tuple) of N numbers must be passed in (setting the modes for each direction), or again a list of lists, each containing N numbers, to vary the modes per layer.

  • activations (torch.nn or list, tuple) – The activation function after each Fourier layer. Default is torch.nn.Tanh()

  • skip_connections (bool or list, tuple) – Whether a skip connection is enabled in each Fourier layer, adding the original input of the layer to the output without any transformation.

  • linear_connections (bool or list, tuple) – Whether the input of each Fourier layer should also be transformed by a (learnable) linear mapping and added to the output.

  • bias (bool or list, tuple) – Whether the above linear connection should include a (learnable) bias vector.

  • channel_up_sample_network (torch.nn) – The network that transforms the input channel dimension to the hidden channel dimension. (The mapping P in [1], Figure 2) Default is a linear mapping.

  • channel_down_sample_network (torch.nn) – The network that transforms the hidden channel dimension to the output channel dimension. (The mapping Q in [1], Figure 2) Default is a linear mapping.

  • xavier_gains (int or list, tuple) – For the weight initialization a Xavier/Glorot algorithm will be used. The gain can be specified via this value. Default is 5/3.

  • space_resolution (int or None) – The resolution of the space grid used for training. This value is optional. If specified, a batch normalization over the space dimension will be applied in each Fourier layer. This leads to smoother solutions and better local approximations, but (currently) removes the super-resolution property of the FNO. This is currently only possible for 1D space domains.

Notes

The FNO assumes that the data is of the shape

(batch, space_dim_1, …, space_dim_n, channels).

E.g. for a one dimensional problem we have (batch, grid points, channels). Additionally, the data needs to exist on a uniform grid to accurately compute the Fourier transform.

Note that this network assumes that the input and output are real numbers; it does not work with complex numbers.
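
A minimal construction sketch (hedged: it assumes the usual import torchphysics as tp alias, that spaces are created via tp.spaces.R1 as in the package tutorials, and all sizes are purely illustrative):

import torch
import torchphysics as tp

# Illustrative spaces: scalar input function values 'f', scalar output 'u'.
F = tp.spaces.R1('f')
U = tp.spaces.R1('u')

fno = tp.models.FNO(input_space=F, output_space=U,
                    fourier_layers=4,      # number of spectral convolution blocks
                    hidden_channels=32,    # width of the hidden channel dimension
                    fourier_modes=16)      # Fourier modes kept per layer

# Expected data layout (see the notes above): (batch, grid points, channels),
# on a uniform grid, e.g. torch.Size([10, 128, 1]) for 10 input functions
# discretized on a 128-point grid with one channel.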

forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchphysics.models.activation_fn module

class torchphysics.models.activation_fn.AdaptiveActivationFunction(activation_fn, inital_a=1.0, scaling=1.0)[source]

Bases: Module

Implementation of the adaptive activation functions used in [2]. Creates activations of the form activation_fn(scaling * a * x), where activation_fn is an arbitrary function, a is the additional trainable parameter and scaling is an additional fixed scaling factor.

Parameters:
  • activation_fn (torch.nn.module) – The underlying function that should be used for the activation.

  • inital_a (float, optional) – The initial value for the adaptive parameter a. Changes the slope of the underlying function. Default is 1.0

  • scaling (float, optional) – An additional scaling factor, such that a only has to learn small values. Stays fixed during training. Default is 1.0
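
A short usage sketch (hedged: the tp alias, the spaces and the concrete values for inital_a and scaling are only illustrative):

import torch.nn as nn
import torchphysics as tp
from torchphysics.models.activation_fn import AdaptiveActivationFunction

# Wrap a standard activation so that the slope parameter a becomes trainable.
adaptive_tanh = AdaptiveActivationFunction(nn.Tanh(), inital_a=0.1, scaling=10.0)

# The wrapped activation can then be used in any model of this package,
# for example in a fully connected network:
X, U = tp.spaces.R1('x'), tp.spaces.R1('u')
model = tp.models.FCN(input_space=X, output_space=U,
                      hidden=(20, 20), activations=adaptive_tanh)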


forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchphysics.models.activation_fn.ReLUn(n)[source]

Bases: Module

Implementation of a smoother version of ReLU, in the form of relu(x)**n.

Parameters:

n (float) – The power n in relu(x)**n, i.e. the exponent applied to the output of the rectified linear unit function.
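
A small sketch of applying the activation directly to a tensor (values are illustrative):

import torch
from torchphysics.models.activation_fn import ReLUn

act = ReLUn(n=2.0)                 # behaves like relu(x)**2
x = torch.linspace(-1.0, 1.0, 5)
y = act(x)                         # zero for x <= 0, x**2 for x > 0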

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchphysics.models.activation_fn.Sinus[source]

Bases: Module

Implementation of a sine activation function.

forward(input)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchphysics.models.activation_fn.relu_n(*args, **kwargs)[source]

Bases: Function

static backward(ctx, grad_output)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors, as there were inputs to forward(). Each argument is the gradient w.r.t the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x, n)[source]

Define the forward of the custom autograd Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
  • It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

  • See combining-forward-context for more details

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.

  • See extending-autograd for more details

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

torchphysics.models.deepritz module

class torchphysics.models.deepritz.DeepRitzNet(input_space, output_space, width, depth)[source]

Bases: Model

Implementation of the architecture used in the Deep Ritz paper [1]. Consists of fully connected layers and residual connections.

Parameters:
  • input_space (Space) – The space of the points that can be put into this model.

  • output_space (Space) – The space of the points returned by this model.

  • width (int) – The width of the hidden fully connected layers.

  • depth (int) – The number of subsequent residual blocks.
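
A minimal construction sketch (hedged: the tp alias, the spaces and the layer sizes are illustrative):

import torchphysics as tp

X = tp.spaces.R2('x')   # 2D spatial input
U = tp.spaces.R1('u')   # scalar output

model = tp.models.DeepRitzNet(input_space=X, output_space=U,
                              width=20,   # neurons per hidden layer
                              depth=3)    # number of residual blocks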


forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchphysics.models.fcn module

class torchphysics.models.fcn.FCN(input_space, output_space, hidden=(20, 20, 20), activations=Tanh(), xavier_gains=1.6666666666666667)[source]

Bases: Model

A simple fully connected neural network.

Parameters:
  • input_space (Space) – The space of the points that can be put into this model.

  • output_space (Space) – The space of the points returned by this model.

  • hidden (list or tuple) – The number and size of the hidden layers of the neural network. The length of the list/tuple equals the number of hidden layers, while the i-th entry determines the number of neurons of the i-th layer. E.g. hidden = (10, 5) -> 2 hidden layers, with 10 and 5 neurons.

  • activations (torch.nn or list, optional) – The activation functions of this network. If a single function is passed, it will be used for every layer. If a list is passed, the i-th entry is used for the i-th layer. Default is nn.Tanh().

  • xavier_gains (float or list, optional) – For the weight initialization a Xavier/Glorot algorithm will be used. The gain can be specified via this value. Default is 5/3.
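
A minimal construction sketch (hedged: the tp alias and the spaces are illustrative; X*T is assumed to denote the Cartesian product of the input spaces, as in the package tutorials):

import torchphysics as tp

X = tp.spaces.R1('x')   # spatial variable
T = tp.spaces.R1('t')   # time variable
U = tp.spaces.R1('u')   # network output

# Three hidden layers with 25 neurons each:
model = tp.models.FCN(input_space=X*T, output_space=U, hidden=(25, 25, 25))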

forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchphysics.models.fcn.Harmonic_FCN(input_space, output_space, max_frequenz: int, hidden=(20, 20, 20), min_frequenz: int = 0, activations=Tanh(), xavier_gains=1.6666666666666667)[source]

Bases: Model

A fully connected neural network that, for the input \(x\), will also compute (and use) the values \((\cos(\pi x), \sin(\pi x), ..., \cos(n \pi x), \sin(n \pi x))\) as inputs. See for example [3] for some theoretical background on why this may be advantageous. Should be used together with a normalization layer, so that the inputs to the cos/sin functions lie in the range [-1, 1].

Parameters:
  • input_space (Space) – The space of the points that can be put into this model.

  • output_space (Space) – The space of the points returned by this model.

  • hidden (list or tuple) – The number and size of the hidden layers of the neural network. The length of the list/tuple equals the number of hidden layers, while the i-th entry determines the number of neurons of the i-th layer. E.g. hidden = (10, 5) -> 2 hidden layers, with 10 and 5 neurons.

  • max_frequenz (int) – The highest frequency that should be used in the input computation. Equal to \(n\) in the description above.

  • min_frequenz (int) – The smallest frequency that should be used. Useful if only higher frequencies are expected in the solution. Default is 0.

  • activations (torch.nn or list, optional) – The activation functions of this network. If a single function is passed, it will be used for every layer. If a list is passed, the i-th entry is used for the i-th layer. Default is nn.Tanh().

  • xavier_gains (float or list, optional) – For the weight initialization a Xavier/Glorot algorithm will be used. The gain can be specified via this value. Default is 5/3.
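
A usage sketch in combination with a normalization layer, as recommended above (hedged: the tp alias, the interval domain created via tp.domains.Interval and the chosen frequencies are illustrative):

import torchphysics as tp

X = tp.spaces.R1('x')
U = tp.spaces.R1('u')
domain = tp.domains.Interval(X, 0, 1)   # assumed problem domain

model = tp.models.Sequential(
    tp.models.NormalizationLayer(domain),                     # scale inputs to [-1, 1]
    tp.models.Harmonic_FCN(input_space=X, output_space=U,
                           max_frequenz=5, hidden=(20, 20))   # cos/sin inputs up to 5*pi*x
)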


forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchphysics.models.fcn.Polynomial_FCN(input_space, output_space, polynomial_degree=1, hidden=(20, 20, 20), activation=Tanh(), xavier_gains=1.6666666666666667, res_connection=False)[source]

Bases: Model

forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchphysics.models.model module

class torchphysics.models.model.AdaptiveWeightLayer(n)[source]

Bases: Module

Adds adaptive weights to the non-reduced loss. The weights are maximized by reversing the gradients, similar to the idea in [4]. Should currently only be used with fixed points.

Parameters:

n (int) – The number of sampled points in each batch.
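
A rough sketch of the intended behavior on a pointwise (non-reduced) loss; in practice the layer is handled inside the torchphysics conditions, and the tensors and shapes here are only illustrative:

import torch
from torchphysics.models.model import AdaptiveWeightLayer

n_points = 100
weight_layer = AdaptiveWeightLayer(n_points)

pointwise_loss = torch.rand(n_points)            # e.g. squared PDE residuals
weighted = weight_layer(pointwise_loss).mean()   # weighted loss used for optimization
# During backpropagation the gradient w.r.t. the weights is reversed, so a
# standard gradient descent step increases the weights of points with large loss.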


class GradReverse(*args, **kwargs)[source]

Bases: Function

static backward(ctx, grad_output)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors, as there were inputs to forward(). Each argument is the gradient w.r.t the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x)[source]

Define the forward of the custom autograd Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
  • It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

  • See combining-forward-context for more details

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.

  • See extending-autograd for more details

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

classmethod grad_reverse(x)[source]

class torchphysics.models.model.Model(input_space, output_space)[source]

Bases: Module

Neural networks that can be trained to fulfill user-defined conditions.

Parameters:
  • input_space (Space) – The space of the points that can be put into this model.

  • output_space (Space) – The space of the points returned by this model.

class torchphysics.models.model.NormalizationLayer(domain)[source]

Bases: Model

A first layer that scales a domain to the range (-1, 1)^domain.dim, since this can improve convergence during training.

Parameters:

domain (Domain) – The domain from which this layer expects sampled points. The layer will use its bounding box to compute the normalization factors.
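
A small sketch (hedged: the tp alias and the interval domain created via tp.domains.Interval are illustrative):

import torchphysics as tp

X = tp.spaces.R1('x')
domain = tp.domains.Interval(X, -3, 7)            # assumed problem domain
normalize = tp.models.NormalizationLayer(domain)  # maps inputs from [-3, 7] to (-1, 1)
# Typically placed in front of another model, see Sequential below.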

forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchphysics.models.model.Parallel(*models)[source]

Bases: Model

A model that wraps multiple models which should be applied in parallel.

Parameters:

*models – The models that should be evaluated in parallel. The evaluation happens in the order in which the models are passed in. The outputs of the models will be concatenated. The models are not allowed to have the same output spaces, but can have the same input spaces.
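
A sketch with two networks for two different output variables (hedged: the tp alias and the spaces are illustrative):

import torchphysics as tp

X = tp.spaces.R1('x')
U, P = tp.spaces.R1('u'), tp.spaces.R1('p')

model_u = tp.models.FCN(input_space=X, output_space=U)
model_p = tp.models.FCN(input_space=X, output_space=P)
model = tp.models.Parallel(model_u, model_p)   # outputs are concatenated to (u, p)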

forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchphysics.models.model.Sequential(*models)[source]

Bases: Model

A model that wraps multiple models which should be applied sequentially.

Parameters:

*models – The models that should be evaluated sequentially. The evaluation happens in the order in which the models are passed in. To work correctly, the output of the i-th model has to fit the input of the (i+1)-th model.
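
A typical combination of a normalization layer and a network (hedged: the tp alias, the spaces and the interval domain are illustrative):

import torchphysics as tp

X = tp.spaces.R1('x')
U = tp.spaces.R1('u')
domain = tp.domains.Interval(X, 0, 10)   # assumed problem domain

model = tp.models.Sequential(
    tp.models.NormalizationLayer(domain),            # scale inputs to (-1, 1)
    tp.models.FCN(input_space=X, output_space=U)     # the actual network
)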

forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

torchphysics.models.parameter module

class torchphysics.models.parameter.Parameter(init, space, **kwargs)[source]

Bases: Points

A parameter that is part of the problem and can be learned during training.

Parameters:
  • init (number, list, array or tensor) – The initial guess for the parameter.

  • space (torchphysics.problem.spaces.Space) – The Space this parameter belongs to. Essentially defines the shape of the parameter, e.g. for a single number use R1.

Notes

To use these Parameters during training, they have to be passed to the condition in which they are used. If many different parameters are used, they have to be connected via .join(); see the Points class for the exact usage.

If the domains themselves should depend on some parameters, or the solution should be learned for different parameter values, this class should NOT be used; these parameters are mostly meant for inverse problems. Instead, such parameters have to be defined with their own domains and samplers.
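
A sketch for a scalar parameter in an inverse problem (hedged: the tp alias is assumed, and the condition call in the comment only illustrates where the parameter is passed, not an exact signature):

import torchphysics as tp

# A scalar coefficient that should be identified during training:
D = tp.models.Parameter(init=1.0, space=tp.spaces.R1('D'))

# The parameter is then handed to the condition that uses it, schematically:
# condition = tp.conditions.PINNCondition(..., parameter=D)
# so that it appears as an additional input of the residual function.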

torchphysics.models.qres module

class torchphysics.models.qres.QRES(input_space, output_space, hidden=(20, 20, 20), activations=Tanh(), xavier_gains=1.6666666666666667)[source]

Bases: Model

Implements the quadratic residual networks from [5]. Instead of a linear layer, a quadratic layer W_1*x (*) W_2*x + W_1*x + b is used. Here (*) denotes the Hadamard product of two vectors (elementwise multiplication).

Parameters:
  • input_space (Space) – The space of the points that can be put into this model.

  • output_space (Space) – The space of the points returned by this model.

  • hidden (list or tuple) – The number and size of the hidden layers of the neural network. The length of the list/tuple equals the number of hidden layers, while the i-th entry determines the number of neurons of the i-th layer. E.g. hidden = (10, 5) -> 2 hidden layers, with 10 and 5 neurons.

  • activations (torch.nn or list, optional) – The activation functions of this network. If a single function is passed, it will be used for every layer. If a list is passed, the i-th entry is used for the i-th layer. Default is nn.Tanh().

  • xavier_gains (float or list, optional) – For the weight initialization a Xavier/Glorot algorithm will be used. The gain can be specified via this value. Default is 5/3.
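
Construction works analogously to FCN (a sketch; the tp alias and the spaces are illustrative):

import torchphysics as tp

X = tp.spaces.R1('x')
U = tp.spaces.R1('u')
model = tp.models.QRES(input_space=X, output_space=U, hidden=(20, 20, 20))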


forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchphysics.models.qres.Quadratic(in_features, out_features, xavier_gains)[source]

Bases: Module

Implements a quadratic layer of the form W_1*x (*) W_2*x + W_1*x + b. Here (*) denotes the Hadamard product of two vectors (elementwise multiplication). W_1, W_2 are weight matrices and b is a bias vector.

Parameters:
  • in_features (int) – size of each input sample.

  • out_features (int) – size of each output sample.

  • xavier_gains (float or list) – For the weight initialization a Xavier/Glorot algorithm will be used. The gain can be specified via this value. Default is 5/3.

forward(points)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property in_features
property out_features