torchphysics.problem.conditions package

Conditions are the central concept in this package. They supply the necessary training data to the model and translate the conditions of the differential equation into the training conditions of the neural network.

A tutorial on the usage of Conditions can be found here.

Submodules

torchphysics.problem.conditions.condition module

class torchphysics.problem.conditions.condition.AdaptiveWeightsCondition(module, sampler, residual_fn, error_fn=SquaredError(), track_gradients=True, data_functions={}, parameter=Parameter: {}, name='adaptive_w_condition', weight=1.0)[source]

Bases: SingleModuleCondition

A condition using an AdaptiveWeightLayer [1] to assign adaptive weights to all points during training.

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points in the domain of the residual function, could be an inner or a boundary domain.

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal

  • error_fn (callable) – Function that will be applied to the output of the residual_fn to compute the unreduced loss (shape [n_points]). The result will be multiplied by the adaptive weights.

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.
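
A minimal usage sketch (not taken from the official docs; the helpers tp.spaces.R1, tp.domains.Interval, tp.samplers.RandomUniformSampler and tp.models.FCN are assumed to be available under their usual torchphysics names):

    import torchphysics as tp

    X = tp.spaces.R1('x')                  # 1D input variable named 'x'
    U = tp.spaces.R1('u')                  # scalar model output named 'u'
    interval = tp.domains.Interval(X, 0, 1)
    bound_sampler = tp.samplers.RandomUniformSampler(interval.boundary, n_points=100)
    model = tp.models.FCN(input_space=X, output_space=U)

    def dirichlet_residual(u):
        return u   # enforce u = 0 on both interval ends

    # Same constructor pattern as PINNCondition, but every sampled point
    # additionally receives a trainable weight from the AdaptiveWeightLayer.
    bc_condition = tp.conditions.AdaptiveWeightsCondition(module=model,
                                                          sampler=bound_sampler,
                                                          residual_fn=dirichlet_residual)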


class torchphysics.problem.conditions.condition.Condition(name=None, weight=1.0, track_gradients=True)[source]

Bases: Module

A general condition which should be optimized or tracked.

Parameters:
  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.

  • track_gradients (bool) – Whether to track input gradients or not. Helps to avoid tracking the gradients during validation. If a condition is applied during training, the gradients will always be tracked.
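
Since a Condition is a torch Module, custom conditions can be written by subclassing it and implementing forward. A schematic sketch (the regularization penalty below is a hypothetical illustration, not part of the library):

    import torchphysics as tp

    class L2RegularizationCondition(tp.conditions.Condition):
        """Hypothetical condition that adds an L2 penalty on a module's parameters."""

        def __init__(self, module, name='l2_reg', weight=1e-4):
            super().__init__(name=name, weight=weight, track_gradients=False)
            self.module = module

        def forward(self, device='cpu', iteration=None):
            # Must return a scalar tensor; the loss is multiplied by `weight` during training.
            return sum(p.pow(2).sum() for p in self.module.parameters())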

abstract forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.DataCondition(module, dataloader, norm, root=1.0, use_full_dataset=False, name='datacondition', constrain_fn=None, weight=1.0)[source]

Bases: Condition

A condition that fits a single given module to data (handed through a PyTorch dataloader).

Parameters:
  • module (torchphysics.Model) – The torch module which should be fitted to data.

  • dataloader (torch.utils.DataLoader) – A PyTorch dataloader which supplies the iterator to load data-target pairs from some given dataset. Data and target should be handed as points in input or output spaces, i.e. with the correct point object.

  • norm (int or 'inf') – The ‘norm’ which should be computed for evaluation. If ‘inf’, the maximum norm will be used. Otherwise, the result will be raised to the n-th power (without taking the root!)

  • root (float) – The n-th root to be computed to obtain the final loss. E.g., if norm=2 and root=2, the loss is the 2-norm.

  • use_full_dataset (bool) – Whether to perform single iterations or compute the error on the whole dataset during forward call. The latter can especially be useful during validation.

  • name (str) – The name of this condition which will be monitored in logging.

  • constrain_fn (callable, optional) – An additional transformation that will be applied to the network output. The function can use all the model inputs (e.g. space and time values) and the corresponding outputs (the solution approximation). Can be used to enforce some conditions (e.g. boundary values) or to scale the output.

  • weight (float) – The weight multiplied with the loss of this condition during training.
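
A minimal sketch of fitting a model to measurement data (PointsDataLoader from tp.utils is assumed here as the dataloader that yields input/target Points pairs; check the installed version for its exact name and location):

    import torch
    import torchphysics as tp

    X = tp.spaces.R1('x')
    U = tp.spaces.R1('u')
    model = tp.models.FCN(input_space=X, output_space=U)

    # Hypothetical measurements: u(x) = sin(x) at 100 sample locations.
    x_data = torch.linspace(0, 3.14, 100).reshape(-1, 1)
    u_data = torch.sin(x_data)
    input_points = tp.spaces.Points(x_data, X)
    target_points = tp.spaces.Points(u_data, U)

    loader = tp.utils.PointsDataLoader((input_points, target_points), batch_size=50)

    fit_condition = tp.conditions.DataCondition(module=model, dataloader=loader,
                                                norm=2, root=2, name='data_fit')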

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.DeepRitzCondition(module, sampler, integrand_fn, track_gradients=True, data_functions={}, parameter=Parameter: {}, name='deepritzcondition', weight=1.0)[source]

Bases: MeanCondition

Alias for MeanCondition.

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points in the domain of the residual function, could be an inner or a boundary domain.

  • integrand_fn (callable) – The integrand of the weak formulation of the differential equation.

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.
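
A sketch of a Deep Ritz setup for -Δu = 1, where the integrand is the energy density 0.5·|∇u|² - u (the helper names follow the usual torchphysics utilities and are assumptions, not prescribed by this class):

    import torch
    import torchphysics as tp

    X = tp.spaces.R2('x')
    U = tp.spaces.R1('u')
    omega = tp.domains.Parallelogram(X, [0, 0], [1, 0], [0, 1])
    sampler = tp.samplers.RandomUniformSampler(omega, n_points=5000)
    model = tp.models.FCN(input_space=X, output_space=U)

    def energy_integrand(u, x):
        # Energy density of -Δu = 1; the mean over the sampled points
        # approximates the integral up to the domain volume.
        grad_u = tp.utils.grad(u, x)
        return 0.5 * torch.sum(grad_u**2, dim=1, keepdim=True) - u

    ritz_condition = tp.conditions.DeepRitzCondition(module=model, sampler=sampler,
                                                     integrand_fn=energy_integrand)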


class torchphysics.problem.conditions.condition.HPCMCondition(module_state, module_corr, dataloader_corr, correction_fn, norm=2, root=1.0, use_full_dataset=True, name='hpcmcondition', weight=1.0)[source]

Bases: Condition

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.HPM_EquationLoss_at_DataPoints(module, dataloader, norm, residual_fn, error_fn=SquaredError(), root=1.0, use_full_dataset=False, name='HPMcondition', reduce_fn=<built-in method mean of type object>, parameter=Parameter: {}, weight=1.0)[source]

Bases: Condition

A condition that minimizes the mean squared error of the given residual with the help of data (handed through a PyTorch dataloader), as required in the framework of HPM [1].

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • dataloader (torch.utils.DataLoader) – A PyTorch dataloader which supplies the iterator to load data-target pairs from some given dataset. Data and target should be handed as points in input or output spaces, i.e. with the correct point object.

  • norm (int or 'inf') – The ‘norm’ which should be computed for evaluation. If ‘inf’, the maximum norm will be used. Otherwise, the result will be raised to the n-th power (without taking the root!)

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.

Notes

[1] Raissi, M. (2018). Deep hidden physics models: Deep learning of nonlinear partial differential equations. The Journal of Machine Learning Research, 19(1), 932-955.

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.HPM_EquationLoss_at_Sampler(module, sampler, residual_fn, error_fn=SquaredError(), reduce_fn=<built-in method mean of type object>, name='SampleHPMCondition', track_gradients=True, data_functions={}, parameter=Parameter: {}, weight=1.0)[source]

Bases: Condition

A condition that minimizes the mean squared error of the given residual on sampled collocation points, instead of using the collocation points of the data set as in the original HPM proposal [1].

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points in the domain of the residual function, could be an inner or a boundary domain.

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.

Notes

[1] Raissi, M. (2018). Deep hidden physics models: Deep learning of nonlinear partial differential equations. The Journal of Machine Learning Research, 19(1), 932-955.

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.IntegroPINNCondition(module, sampler, residual_fn, integral_sampler, error_fn=SquaredError(), reduce_fn=<built-in method mean of type object>, name='periodiccondition', track_gradients=True, data_functions={}, parameter=Parameter: {}, weight=1.0)[source]

Bases: Condition

A condition that also allows including integrals or convolutions in the residual by sampling a second set of points with an additional sampler.

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • sampler (torchphysics.samplers.PointSampler) – A sampler that creates the usual set of points.

  • integral_sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points that can be used to approximate an integral.

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal. The points used to approximate the integral and the model outputs at these points are passed as additional inputs named {name}_integral.

  • error_fn (callable) – Function that will be applied to the output of the residual_fn to compute the unreduced loss. Should reduce only along the 2nd (i.e. space-)axis.

  • reduce_fn (callable) – Function that will be applied to reduce the loss to a scalar. Defaults to torch.mean

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.
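
A schematic sketch for the integro-differential equation u'(x) = ∫₀¹ u(s) ds. The extra input name follows the documented {name}_integral pattern, but its exact shape and whether the integration points themselves are exposed as well are version-dependent assumptions:

    import torch
    import torchphysics as tp

    X = tp.spaces.R1('x')
    U = tp.spaces.R1('u')
    interval = tp.domains.Interval(X, 0, 1)
    col_sampler = tp.samplers.RandomUniformSampler(interval, n_points=500)
    int_sampler = tp.samplers.RandomUniformSampler(interval, n_points=200)
    model = tp.models.FCN(input_space=X, output_space=U)

    def integro_residual(u, x, u_integral):
        # u_integral holds the model evaluated at the points of integral_sampler;
        # its mean approximates the integral over the unit interval.
        return tp.utils.grad(u, x) - torch.mean(u_integral)

    cond = tp.conditions.IntegroPINNCondition(module=model, sampler=col_sampler,
                                              residual_fn=integro_residual,
                                              integral_sampler=int_sampler)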

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.MeanCondition(module, sampler, residual_fn, track_gradients=True, data_functions={}, parameter=Parameter: {}, name='meancondition', weight=1.0)[source]

Bases: SingleModuleCondition

A condition that minimizes the mean of the residual of a single module. It can be used, e.g., in the Deep Ritz method [2] or for energy functionals, since the mean can be seen as a (scaled) integral approximation.

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points in the domain of the residual function, could be an inner or a boundary domain.

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.


class torchphysics.problem.conditions.condition.PINNCondition(module, sampler, residual_fn, track_gradients=True, data_functions={}, parameter=Parameter: {}, name='pinncondition', weight=1.0)[source]

Bases: SingleModuleCondition

A condition that minimizes the mean squared error of the given residual, as required in the framework of physics-informed neural networks [3].

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points in the domain of the residual function, could be an inner or a boundary domain.

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.
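
A minimal sketch of a PINN condition for the Poisson problem -Δu = 1 on the unit square (the helpers tp.spaces, tp.domains.Parallelogram, tp.samplers.RandomUniformSampler, tp.models.FCN and tp.utils.laplacian are assumed under their usual torchphysics names):

    import torchphysics as tp

    X = tp.spaces.R2('x')                  # 2D spatial variable named 'x'
    U = tp.spaces.R1('u')                  # scalar model output named 'u'
    omega = tp.domains.Parallelogram(X, [0, 0], [1, 0], [0, 1])
    inner_sampler = tp.samplers.RandomUniformSampler(omega, n_points=5000)
    model = tp.models.FCN(input_space=X, output_space=U)

    def poisson_residual(u, x):
        # Residual of -Δu = 1; argument names match the output/input variable names.
        return tp.utils.laplacian(u, x) + 1.0

    pde_condition = tp.conditions.PINNCondition(module=model, sampler=inner_sampler,
                                                residual_fn=poisson_residual,
                                                name='pde_condition')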


class torchphysics.problem.conditions.condition.ParameterCondition(parameter, penalty, weight, name='parametercondition')[source]

Bases: Condition

A condition that applies a penalty term on some parameters which are optimized during the training process.

Parameters:
  • parameter (torchphysics.Parameter) – The parameter that should be optimized.

  • penalty (callable) – A user-defined function that defines a penalty term on the parameters.

  • weight (float) – The weight multiplied with the loss of the penalty during training.

  • name (str) – The name of this condition which will be monitored in logging.
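
A sketch of a penalty on a learnable parameter. tp.models.Parameter is the usual place for learnable parameters; the exact calling convention of penalty (keyword by variable name vs. the raw tensor) may differ between versions:

    import torch
    import torchphysics as tp

    # A learnable scalar coefficient named 'D'.
    D = tp.models.Parameter(init=1.0, space=tp.spaces.R1('D'))

    def positivity_penalty(D):
        # Penalize negative values to keep the learned coefficient physical.
        return torch.sum(torch.relu(-D)**2)

    param_condition = tp.conditions.ParameterCondition(parameter=D,
                                                       penalty=positivity_penalty,
                                                       weight=10.0)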

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.PeriodicCondition(module, periodic_interval, residual_fn, non_periodic_sampler=<torchphysics.problem.samplers.sampler_base.EmptySampler object>, error_fn=SquaredError(), reduce_fn=<built-in method mean of type object>, name='periodiccondition', track_gradients=True, data_functions={}, parameter=Parameter: {}, weight=1.0)[source]

Bases: Condition

A condition that allows learning dependencies between points at the two ends of a given Interval. Can be used, e.g., for a variety of periodic boundary conditions.

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • periodic_interval (torchphysics.domains.Interval) – The interval on whose boundary the periodic (boundary) condition will be imposed.

  • non_periodic_sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points for the axes that are not defined via the periodic_interval.

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal. Instead of the name of the axis of the periodic interval, it takes {name}_left and {name}_right as inputs. The same holds for all outputs of the network and the results of the data_functions.

  • error_fn (callable) – Function that will be applied to the output of the residual_fn to compute the unreduced loss. Should reduce only along the 2nd (i.e. space-)axis.

  • reduce_fn (callable) – Function that will be applied to reduce the loss to a scalar. Defaults to torch.mean

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.
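
A sketch enforcing u(0, t) = u(1, t) for a model on the product space of x and t; the _left/_right naming follows the documented pattern, while the remaining helper names are assumptions:

    import torchphysics as tp

    X = tp.spaces.R1('x')
    T = tp.spaces.R1('t')
    U = tp.spaces.R1('u')
    x_interval = tp.domains.Interval(X, 0, 1)
    t_interval = tp.domains.Interval(T, 0, 1)
    t_sampler = tp.samplers.RandomUniformSampler(t_interval, n_points=200)
    model = tp.models.FCN(input_space=X*T, output_space=U)

    def periodic_residual(u_left, u_right):
        # Outputs at both ends of the periodic interval are handed over
        # under the '{name}_left' / '{name}_right' naming scheme.
        return u_left - u_right

    periodic_condition = tp.conditions.PeriodicCondition(module=model,
                                                         periodic_interval=x_interval,
                                                         residual_fn=periodic_residual,
                                                         non_periodic_sampler=t_sampler)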

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.SingleModuleCondition(module, sampler, residual_fn, error_fn, reduce_fn=<built-in method mean of type object>, name='singlemodulecondition', track_gradients=True, data_functions={}, parameter=Parameter: {}, weight=1.0)[source]

Bases: Condition

A condition that minimizes the reduced loss of a single module.

Parameters:
  • module (torchphysics.Model) – The torch module which should be optimized.

  • sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points in the domain of the residual function, could be an inner or a boundary domain.

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal

  • error_fn (callable) – Function that will be applied to the output of the residual_fn to compute the unreduced loss. Should reduce only along the 2nd (i.e. space-)axis.

  • reduce_fn (callable) – Function that will be applied to reduce the loss to a scalar. Defaults to torch.mean

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.
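
The base class can also be used directly with a custom error_fn and reduce_fn, e.g. to train on the worst-case (maximum) pointwise error instead of the mean squared error. A sketch, with the same assumed helpers as in the examples above:

    import torch
    import torchphysics as tp

    X = tp.spaces.R1('x')
    U = tp.spaces.R1('u')
    interval = tp.domains.Interval(X, 0, 1)
    sampler = tp.samplers.RandomUniformSampler(interval, n_points=1000)
    model = tp.models.FCN(input_space=X, output_space=U)

    def ode_residual(u, x):
        return tp.utils.grad(u, x) - u   # residual of u' = u

    cond = tp.conditions.SingleModuleCondition(
        module=model, sampler=sampler, residual_fn=ode_residual,
        error_fn=lambda r: torch.sum(torch.abs(r), dim=1),  # one error value per point
        reduce_fn=torch.max,                                # train on the worst point
    )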

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.condition.SquaredError[source]

Bases: Module

Implements the sum of squared errors along the space dimension.

forward(x)[source]

Computes the squared error of the input.

Parameters:

x (torch.Tensor) – The values for which the squared error should be computed.
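
Schematically, the documented behavior corresponds to the following reduction (the actual implementation may differ in details such as kept dimensions):

    import torch

    def squared_error(x):
        # Sum the squared entries along the space axis, one error value per point.
        return torch.sum(x**2, dim=1)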

torchphysics.problem.conditions.deeponet_condition module

class torchphysics.problem.conditions.deeponet_condition.DeepONetDataCondition(module, dataloader, norm, constrain_fn=None, root=1.0, use_full_dataset=False, name='datacondition', weight=1.0)[source]

Bases: DataCondition

A condition that fits a single given module to data (handed through a PyTorch dataloader).

Parameters:
  • module (torchphysics.Model) – The torch module which should be fitted to data.

  • dataloader (torch.utils.DataLoader) – A PyTorch dataloader which supplies the iterator to load data-target pairs from some given dataset. Data and target should be handed as points in input or output spaces, i.e. with the correct point object.

  • norm (int or 'inf') – The ‘norm’ which should be computed for evaluation. If ‘inf’, the maximum norm will be used. Otherwise, the result will be raised to the n-th power (without taking the root!)

  • constrain_fn (callable, optional) – An additional transformation that will be applied to the network output. The function receives all the trunk inputs (e.g. space and time values) and the corresponding outputs of the final model (the solution approximation). Can be used to enforce some conditions (e.g. boundary values) or to scale the output.

  • root (float) – The n-th root to be computed to obtain the final loss. E.g., if norm=2 and root=2, the loss is the 2-norm.

  • use_full_dataset (bool) – Whether to perform single iterations or compute the error on the whole dataset during forward call. The latter can especially be useful during validation.

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.

class torchphysics.problem.conditions.deeponet_condition.DeepONetSingleModuleCondition(deeponet_model, function_set, input_sampler, residual_fn, error_fn, reduce_fn=<built-in method mean of type object>, name='singlemodulecondition', track_gradients=True, data_functions={}, parameter=Parameter: {}, weight=1.0)[source]

Bases: Condition

forward(device='cpu', iteration=None)[source]

The forward run performed by this condition.

Returns:

the loss which should be minimized or monitored during training

Return type:

torch.Tensor

class torchphysics.problem.conditions.deeponet_condition.PIDeepONetCondition(deeponet_model, function_set, input_sampler, residual_fn, name='pinncondition', track_gradients=True, data_functions={}, parameter=Parameter: {}, weight=1.0)[source]

Bases: DeepONetSingleModuleCondition

A condition that minimizes the mean squared error of the given residual, as required in the framework of physics-informed DeepONets [4].

Parameters:
  • deeponet_model (torchphysics.models.DeepONet) – The DeepONet-model, consisting of trunk and branch net that should be optimized.

  • function_set (torchphysics.domains.FunctionSet) – A FunctionSet that provides the different input functions for the branch net.

  • input_sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points inside the domain of the residual function, could be an inner or a boundary domain.

  • residual_fn (callable) – A user-defined function that computes the residual (unreduced loss) from inputs and outputs of the model, e.g. by using utils.differentialoperators and/or domain.normal

  • data_functions (dict) – A dictionary of user-defined functions and their names (as keys). Can be used e.g. for right sides in PDEs or functions in boundary conditions.

  • track_gradients (bool) – Whether gradients w.r.t. the inputs should be tracked during training or not. Defaults to true, since this is needed to compute differential operators in PINNs.

  • parameter (Parameter) – A Parameter that can be used in the residual_fn and should be learned in parallel, e.g. based on data (in an additional DataCondition).

  • name (str) – The name of this condition which will be monitored in logging.

  • weight (float) – The weight multiplied with the loss of this condition during training.
