Contains different PyTorch models which can be trained to
approximate the solution of a differential equation.
Additional basic network structures are implemented, meant to stabilize and speed up
the training process (e.g. adaptive weights, normalization layers).
If different models should be applied to different parts of the differential equation, this can be
achieved by using the classes torchphysics.models.Sequential and torchphysics.models.Parallel.
Here you also find the parameters that can be learned in inverse problems.
The Fourier Neural Operator, originally developed in [1].
Parameters:
input_space (Space) – The space of the points that can be put into this model.
output_space (Space) – The space of the points returned by this model.
fourier_layers (int) – The number of Fourier layers of this network. Each Fourier layer consists
of a spectral convolution with learnable kernels. See [1] for an overview
of the model. Linear transformations and skip connections can be enabled
in each layer as well.
hidden_channels (int) – The number of hidden channels.
fourier_modes (int or list, tuple) – The number of Fourier modes used for the spectral convolution
in each layer. Modes above the given value will be truncated; if fewer
modes are available, they are padded with zeros.
In case of a 1D space domain you can pass in a single integer, or a list of
integers to use a different number of modes in each layer.
In case of an N-dimensional space domain, a list (or tuple) of N numbers
must be passed in (setting the modes for each direction), or again
a list of lists, each containing N numbers, to vary the modes per layer.
activations (torch.nn or list, tuple) – The activation function after each Fourier layer.
Default is torch.nn.Tanh()
skip_connections (bool or list, tuple) – Whether a skip connection is enabled in each Fourier layer, adding the original
input of the layer to the output without any transformation.
linear_connection (bool or list, tuple) – Whether the input of each Fourier layer should also be transformed by a
(learnable) linear mapping and added to the output.
bias (bool or list, tuple) – Whether the above linear connection should include a (learnable) bias vector.
channel_up_sample_network (torch.nn) – The network that transforms the input channel dimension to the
hidden channel dimension. (The mapping P in [1], Figure 2)
Default is a linear mapping.
channel_down_sample_network (torch.nn) – The network that transforms the hidden channel dimension to the
output channel dimension. (The mapping Q in [1], Figure 2)
Default is a linear mapping.
xavier_gains (int or list, tuple) – For the weight initialization, a Xavier/Glorot algorithm will be used.
The gain can be specified via this value.
Default is 5/3.
space_resolution (int or None) – The resolution of the space grid used for training. This value is optional.
If specified, a batch normalization over the space dimension will be applied
in each Fourier layer. This leads to smoother solutions and better local
approximations, but (currently) removes the super-resolution property of the
FNO. This is currently only possible for 1D space domains.
Notes
The FNO assumes that the data is of the shape
(batch, space_dim_1, …, space_dim_n, channels).
E.g. for a one dimensional problem we have (batch, grid points, channels).
Additionally, the data needs to exist on a uniform grid to accurately
compute the Fourier transform.
Note that this network assumes the input and output are real numbers;
it does not work with complex numbers.
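A minimal, hypothetical instantiation sketch based on the parameters above (the space setup and exact keyword spellings are assumptions, not taken from the library):

    import torch
    import torchphysics as tp

    X = tp.spaces.R1('x')   # assumed: 1D space variable on the grid
    U = tp.spaces.R1('u')   # assumed: scalar output variable

    model = tp.models.FNO(
        input_space=X,
        output_space=U,
        fourier_layers=4,
        hidden_channels=32,      # keyword spelling may differ in the installed version
        fourier_modes=16,
        activations=torch.nn.Tanh(),
        skip_connections=True,
        linear_connection=True,
    )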
Implementation of the adaptive activation functions used in [2].
Will create activations of the form: activation_fn(scaling * a * x),
where activation_fn is an arbitrary function, a is the additional (trainable)
hyperparameter and scaling is an additional fixed scaling factor.
Parameters:
activation_fn (torch.nn.module) – The underlying function that should be used for the activation.
inital_a (float, optional) – The initial value for the adaptive parameter a. Changes the 'slope'
of the underlying function. Default is 1.0.
scaling (float, optional) – An additional scaling factor, such that a only has to learn
small values. Will stay fixed during training. Default is 1.0.
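A short usage sketch, assuming the class is exposed as tp.models.AdaptiveActivationFunction; the resulting module could then, for example, be used as the activation of a network:

    import torch
    import torchphysics as tp

    adaptive_tanh = tp.models.AdaptiveActivationFunction(
        torch.nn.Tanh(), inital_a=1.0, scaling=10.0
    )
    # e.g. pass it to a fully connected network:
    # model = tp.models.FCN(..., activations=adaptive_tanh)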
input_space (Space) – The space of the points that can be put into this model.
output_space (Space) – The space of the points returned by this model.
hidden (list or tuple) – The number and size of the hidden layers of the neural network.
The length of the list/tuple will be equal to the number
of hidden layers, while the i-th entry will determine the number
of neurons of the i-th layer.
E.g. hidden = (10, 5) -> 2 layers, with 10 and 5 neurons.
activations (torch.nn or list, optional) – The activation functions of this network. If a single function is passed
as an input, will use this function for each layer.
If a list is used, will use the i-th entry for the i-th layer.
Default is nn.Tanh().
xavier_gains (float or list, optional) – For the weight initialization, a Xavier/Glorot algorithm will be used.
The gain can be specified via this value.
Default is 5/3.
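A hypothetical instantiation of this fully connected network; the space names and problem setup are placeholders:

    import torch
    import torchphysics as tp

    T = tp.spaces.R1('t')    # assumed time variable
    X = tp.spaces.R2('x')    # assumed 2D space variable
    U = tp.spaces.R1('u')    # assumed scalar output

    model = tp.models.FCN(
        input_space=T*X,
        output_space=U,
        hidden=(50, 50, 50),           # three hidden layers, 50 neurons each
        activations=torch.nn.Tanh(),
        xavier_gains=5/3,
    )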
A fully connected neural network that, for the input \(x\), will also
compute (and use) the values
\((\cos(\pi x), \sin(\pi x), \dots, \cos(n \pi x), \sin(n \pi x))\)
as additional inputs. See, for example, [3] for some theoretical background on why this may be
advantageous.
Should be used in sequence with a normalization layer, to get inputs in the range
[-1, 1] for the cos/sin functions.
Parameters:
input_space (Space) – The space of the points that can be put into this model.
output_space (Space) – The space of the points returned by this model.
hidden (list or tuple) – The number and size of the hidden layers of the neural network.
The length of the list/tuple will be equal to the number
of hidden layers, while the i-th entry will determine the number
of neurons of the i-th layer.
E.g. hidden = (10, 5) -> 2 layers, with 10 and 5 neurons.
max_frequenz (int) – The highest frequency that should be used in the input computation.
Equal to \(n\) in the above description.
min_frequenz (int) – The smallest frequency that should be used. Useful if it is expected that
only higher frequencies appear in the solution.
Default is 0.
activations (torch.nn or list, optional) – The activation functions of this network. If a single function is passed
as an input, will use this function for each layer.
If a list is used, will use the i-th entry for the i-th layer.
Default is nn.Tanh().
xavier_gains (float or list, optional) – For the weight initialization, a Xavier/Glorot algorithm will be used.
The gain can be specified via this value.
Default is 5/3.
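A pure-PyTorch sketch of the extra input features described above (not the library implementation), where n corresponds to max_frequenz:

    import math
    import torch

    def harmonic_features(x, min_frequenz=0, max_frequenz=4):
        # x is assumed to be normalized to [-1, 1], shape (batch, 1)
        feats = [x]
        for k in range(min_frequenz, max_frequenz + 1):
            feats.append(torch.cos(k * math.pi * x))
            feats.append(torch.sin(k * math.pi * x))
        return torch.cat(feats, dim=-1)

    x = torch.linspace(-1, 1, 10).unsqueeze(-1)
    print(harmonic_features(x).shape)   # torch.Size([10, 11])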
Adds adaptive weights to the non-reduced loss. The weights are maximized by
reversing the gradients, similar to the idea in [4].
Should currently only be used with fixed points.
Parameters:
n (int) – The number of sampled points in each batch.
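A conceptual pure-PyTorch sketch of the gradient-reversal idea (not the library code): point-wise weights multiply the non-reduced loss, and their gradient is flipped so that gradient descent on the total loss effectively maximizes the weights:

    import torch

    class _GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, w):
            return w.view_as(w)
        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output          # flip the gradient for the weights

    n = 100                              # sampled points per batch
    weights = torch.nn.Parameter(torch.ones(n))
    residuals = torch.randn(n)           # placeholder for a point-wise loss
    loss = (_GradReverse.apply(weights) * residuals**2).mean()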
A model that wraps multiple models which should be applied in parallel.
Parameters:
*models – The models that should be evaluated in parallel. The evaluation
happens in the order that the models are passed in.
The outputs of the models will be concatenated.
The models are not allowed to have the same output spaces, but can
have the same input spaces.
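A hypothetical usage sketch with two networks that share the input space but have different output spaces (space names are placeholders):

    import torchphysics as tp

    X = tp.spaces.R2('x')    # shared input space
    U = tp.spaces.R1('u')
    P = tp.spaces.R1('p')

    model_u = tp.models.FCN(input_space=X, output_space=U, hidden=(30, 30))
    model_p = tp.models.FCN(input_space=X, output_space=P, hidden=(20, 20))
    model = tp.models.Parallel(model_u, model_p)   # outputs are concatenated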
A model that wraps multiple models which should be applied sequentially.
Parameters:
*models – The models that should be evaluated sequentially. The evaluation
happens in the order that the models are passed in.
To work correctly, the output of the i-th model has to fit the input
of the (i+1)-th model.
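A hypothetical usage sketch chaining a normalization layer and a fully connected network (the NormalizationLayer name and signature are assumptions):

    import torchphysics as tp

    X = tp.spaces.R1('x')
    U = tp.spaces.R1('u')
    domain = tp.domains.Interval(X, 0, 5)

    model = tp.models.Sequential(
        tp.models.NormalizationLayer(domain),   # assumed: maps inputs into [-1, 1]
        tp.models.FCN(input_space=X, output_space=U, hidden=(40, 40)),
    )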
A parameter that is part of the problem and can be learned during training.
Parameters:
init (number, list, array or tensor) – The initial guess for the parameter.
space (torchphysics.problem.spaces.Space) – The Space to which this parameter belongs. Essentially defines the
shape of the parameter, e.g. for a single number use R1.
Notes
To use these Parameters during training, they have to be passed on to the used
condition. If many different parameters are used, they have to be connected via
.join(); see the Points class for the exact usage.
If the domain itself should depend on some parameters, or the solution should be
learned for different parameter values, this class should NOT be used; instead,
those parameters have to be defined with their own domain and sampler.
These Parameters are mostly meant for inverse problems.
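A hypothetical inverse-problem sketch: a scalar coefficient D that is learned alongside the network; how it is attached to a condition depends on the condition class and is only hinted at here:

    import torchphysics as tp

    D = tp.models.Parameter(init=1.0, space=tp.spaces.R1('D'))
    # The parameter would then be handed to the condition that defines the
    # differential equation, e.g. something along the lines of:
    # tp.conditions.PINNCondition(..., parameter=D)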
Implements the quadratic residual networks from [5].
Instead of a linear layer, a quadratic layer W_1*x (*) W_2*x + W_1*x + b
will be used. Here (*) denotes the Hadamard product of two vectors
(elementwise multiplication).
Parameters:
input_space (Space) – The space of the points that can be put into this model.
output_space (Space) – The space of the points returned by this model.
hidden (list or tuple) – The number and size of the hidden layers of the neural network.
The length of the list/tuple will be equal to the number
of hidden layers, while the i-th entry will determine the number
of neurons of the i-th layer.
E.g. hidden = (10, 5) -> 2 layers, with 10 and 5 neurons.
activations (torch.nn or list, optional) – The activation functions of this network. If a single function is passed
as an input, will use this function for each layer.
If a list is used, will use the i-th entry for the i-th layer.
Default is nn.Tanh().
xavier_gains (float or list, optional) – For the weight initialization, a Xavier/Glorot algorithm will be used.
The gain can be specified via this value.
Default is 5/3.
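A hypothetical instantiation, mirroring the FCN interface described above (the class name QRES is an assumption):

    import torch
    import torchphysics as tp

    X = tp.spaces.R1('x')
    U = tp.spaces.R1('u')

    model = tp.models.QRES(
        input_space=X,
        output_space=U,
        hidden=(20, 20, 20),
        activations=torch.nn.Tanh(),
    )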
Implements a quadratic layer of the form: W_1*x (*) W_2*x + W_1*x + b.
Here (*) denotes the Hadamard product of two vectors (elementwise multiplication).
W_1, W_2 are weight matrices and b is a bias vector.
xavier_gains (float or list) – For the weight initialization, a Xavier/Glorot algorithm will be used.
The gain can be specified via this value.
Default is 5/3.
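A pure-PyTorch sketch of this layer (not the library implementation):

    import torch

    class QuadraticLayerSketch(torch.nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            self.W1 = torch.nn.Linear(in_features, out_features, bias=False)
            self.W2 = torch.nn.Linear(in_features, out_features, bias=False)
            self.b = torch.nn.Parameter(torch.zeros(out_features))

        def forward(self, x):
            w1x = self.W1(x)
            return w1x * self.W2(x) + w1x + self.b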