torchphysics.utils package
Useful helper methods for the definition and evaluation of a problem.
For the creation of conditions, some differential operators are implemented under torchphysics.utils.differentialoperators.
For the evaluation of the trained model, some plot and animation functionalities are provided. They can give you a rough overview of the computed solution. These are found under torchphysics.utils.plotting.
Subpackages
- torchphysics.utils.data package
- torchphysics.utils.differentialoperators namespace
- torchphysics.utils.plotting package
Submodules
torchphysics.utils.callbacks module
- class torchphysics.utils.callbacks.PlotterCallback(model, plot_function, point_sampler, log_name='plot', check_interval=200, angle=[30, 30], plot_type='', **kwargs)[source]
Bases: Callback
Object for plotting (logging plots) inside of TensorBoard. Can be passed to the PyTorch Lightning trainer. A short usage sketch follows the parameter list.
- Parameters:
plot_function (callable) – A function that specifies the part of the model that should be plotted.
point_sampler (torchphysics.samplers.PlotSampler) – A sampler that creates the points that should be used for the plot.
log_name (str, optional) – Name of the plots inside of TensorBoard.
check_interval (int, optional) – Plots will be saved every check_interval steps, if the plotter is used.
angle (list, optional) – The view angle for surface plots. Default is [30, 30].
plot_type (str, optional) – Specifies how the output should be plotted. If no input is given, the method will try to choose a fitting way to show the data. See also the plot functions.
kwargs – Additional arguments to specify different parameters/behaviour of the plot. See https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.html for possible arguments of each underlying object.
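A minimal usage sketch for this callback. The space, domain, model and sampler construction below are illustrative assumptions, not part of this module:

```python
import pytorch_lightning as pl
import torchphysics as tp
from torchphysics.utils.callbacks import PlotterCallback

# Illustrative problem setup (names and values are assumptions):
X = tp.spaces.R2('x')
U = tp.spaces.R1('u')
domain = tp.domains.Parallelogram(X, [0, 0], [1, 0], [0, 1])
model = tp.models.FCN(input_space=X, output_space=U)

plot_sampler = tp.samplers.PlotSampler(domain, n_points=600)

plotter = PlotterCallback(
    model=model,
    plot_function=lambda u: u,   # plot the raw network output
    point_sampler=plot_sampler,
    log_name='solution',
    check_interval=500,
)

trainer = pl.Trainer(max_steps=5000, callbacks=[plotter])
```

The plots are then written to TensorBoard every check_interval training steps.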
- class torchphysics.utils.callbacks.TrainerStateCheckpoint(path, name, check_interval=200, weights_only=False)[source]
Bases: Callback
A callback to save the current state of the trainer (a PyTorch Lightning checkpoint), so that the training can be resumed at a later point in time. A short usage sketch follows the note below.
- Parameters:
path (str) – The relative path of the saved weights.
name (str) – A name that will become part of the file name of the saved weights.
check_interval (int, optional) – Checkpoints will be saved every check_interval steps. Default is 200.
weights_only (bool, optional) – Whether only the model parameters should be saved. Default is False.
Note
To continue from the checkpoint, use ckpt_path="some/path/to/my_checkpoint.ckpt" as argument in the fit command of the trainer.
A PyTorch Lightning checkpoint would normally save the current epoch and restart from it. In TorchPhysics we don't use multiple epochs; instead we train with multiple iterations inside "one giant epoch". If the training is restarted, the trainer will therefore always start from iteration 0 (essentially the last completed epoch), but all other states (model, optimizer, …) will be correctly restored.
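A hedged sketch of saving and resuming a run. The solver object and the exact checkpoint file name are assumptions; the file name depends on the chosen path and name:

```python
import pytorch_lightning as pl
from torchphysics.utils.callbacks import TrainerStateCheckpoint

checkpoint = TrainerStateCheckpoint(path='./checkpoints', name='my_run',
                                    check_interval=500)

trainer = pl.Trainer(max_steps=10000, callbacks=[checkpoint])
trainer.fit(solver)  # `solver` is the tp.solver.Solver of the problem (assumed)

# Resume later: pass the saved checkpoint to fit(). The iteration counter
# restarts at 0, but model and optimizer states are restored.
trainer = pl.Trainer(max_steps=10000, callbacks=[checkpoint])
trainer.fit(solver, ckpt_path="./checkpoints/my_run.ckpt")
```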
- class torchphysics.utils.callbacks.WeightSaveCallback(model, path, name, check_interval, save_initial_model=False, save_final_model=True)[source]
Bases: Callback
A callback to save the weights of a model during training. Can save the model weights before, during and after training. During training, only the model with minimal loss will be saved. A short usage sketch follows the parameter list.
- Parameters:
model (torch.nn.Module) – The model of which the weights should be saved.
path (str) – The relative path of the saved weights.
name (str) – A name that will become part of the file name of the saved weights.
check_interval (int) – The callback will check for minimal loss every check_interval iterations. If negative, no weights will be saved during training.
save_initial_model (bool, optional) – Whether the model should also be saved before training. Default is False.
save_final_model (bool, optional) – Whether the model should always be saved after the last iteration. Default is True.
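A short sketch of attaching this callback to the trainer; the model, solver and all values are illustrative:

```python
import pytorch_lightning as pl
from torchphysics.utils.callbacks import WeightSaveCallback

weight_saver = WeightSaveCallback(
    model=model,                # the torch.nn.Module whose weights get saved (assumed)
    path='./saved_weights',
    name='best_model',
    check_interval=200,         # check for a new minimal loss every 200 iterations
    save_initial_model=False,
    save_final_model=True,
)

trainer = pl.Trainer(max_steps=5000, callbacks=[weight_saver])
trainer.fit(solver)             # `solver` is assumed to be defined elsewhere
```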
torchphysics.utils.evaluation module
This file contains different helper functions to get specific information about the computed solution.
- torchphysics.utils.evaluation.compute_min_and_max(model, sampler, evaluation_fn=<function <lambda>>, device='cpu', requieres_grad=False)[source]
Computes the minimum and maximum values of the model w.r.t. the given variables.
- Parameters:
model (DiffEqModel) – A neural network of which values should be computed.
sampler (torchphysics.samplers.PointSampler) – A sampler that creates the points where the model should be evaluated.
evaluation_fn (callable) – A user-defined function that uses the neural network and creates the desired output quantity.
device (str or torch device) – The device of the model.
requieres_grad (bool, optional) – Whether to track gradients of the input or not. Default is False.
- Returns:
float – The minimum value computed.
float – The maximum value computed.
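A sketch of how this helper might be called, assuming a trained model, an already defined domain, and that the raw model output (here named u) is the quantity of interest; the sampler type and values are illustrative:

```python
import torchphysics as tp
from torchphysics.utils.evaluation import compute_min_and_max

sampler = tp.samplers.GridSampler(domain, n_points=10000)  # evaluation points on an assumed domain

min_val, max_val = compute_min_and_max(
    model=model,
    sampler=sampler,
    evaluation_fn=lambda u: u,  # just the raw model output
    device='cpu',
)
print(f"min: {min_val}, max: {max_val}")
```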
torchphysics.utils.user_fun module
Contains a class which extracts the needed arguments of an arbitrary method/function and wraps them for future use, e.g., correctly choosing the needed arguments and passing them on to the original function.
- class torchphysics.utils.user_fun.DomainUserFunction(fun, defaults={}, args={})[source]
Bases: UserFunction
Extension of the original UserFunction, which is used in the Domain classes.
- Parameters:
fun (callable) – The original function that should be wrapped.
defaults (dict, optional) – Possible default arguments of the function. If none are specified, the wrapper checks by itself whether there are any.
args (dict, optional) – All arguments of the function. If none are specified, the wrapper checks by itself whether there are any.
Notes
The only difference to the normal UserFunction is how the evaluation of the original function is handled. Since all domains use PyTorch, we check that the output is always a torch.Tensor. In case the function is not constant, we also append an extra dimension to the output, so that the domains can work with it correctly.
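A small sketch of this behaviour; the concrete function and input data are assumptions:

```python
import torch
from torchphysics.utils.user_fun import DomainUserFunction

# e.g. a time dependent radius of a moving or growing domain
radius = DomainUserFunction(lambda t: 1.0 + 0.5 * t)

t_values = {'t': torch.linspace(0, 1, 5).reshape(-1, 1)}
r = radius(t_values)  # torch.Tensor; the non-constant output gets an extra dimension appended
```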
- class torchphysics.utils.user_fun.UserFunction(fun, defaults={}, args={})[source]
Bases: object
Wraps a function, so that it can be called with arbitrary input arguments.
- Parameters:
fun (callable) – The original function that should be wrapped.
defaults (dict, optional) – Possible default arguments of the function. If none are specified, the wrapper checks by itself whether there are any.
args (dict, optional) – All arguments of the function. If none are specified, the wrapper checks by itself whether there are any.
Notes
Uses inspect.getfullargspec(fun) to get the possible input arguments. When called, it extracts only the needed arguments and passes them to the original function.
- __call__(args={}, vectorize=False)[source]
Evaluates the function. Automatically extracts the needed arguments from the input data and sets the possible default values.
- Parameters:
args (dict or torchphysics.Points) – The input data, where the function should be evaluated.
vectorize (bool, optional) – Whether a loop needs to be used to evaluate the function, because the original function cannot handle a whole batch of data. Default is False, i.e. we assume the function can work with a batch of data.
- Returns:
The output values of the function.
- Return type:
torch.tensor
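A minimal sketch of wrapping and calling a function; the argument names and data are assumptions. Only the arguments the wrapped function actually needs are extracted from the input:

```python
import torch
from torchphysics.utils.user_fun import UserFunction

f = UserFunction(lambda x, t: x * t)

data = {'x': torch.tensor([[1.0], [2.0]]),
        't': torch.tensor([[0.5], [0.5]]),
        'D': torch.tensor([[3.0], [3.0]])}  # 'D' is not needed and simply ignored

out = f(data)  # expected: tensor([[0.5], [1.0]])
```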
- apply_to_batch(inp)[source]
Applies the function to a batch of elements by running a for-loop. We assume that all inputs either have a batch (i.e. maximum) dimension or are a constant parameter.
- Parameters:
inp (torchphysics.Points) – The Points object of the input data.
- Returns:
The output values of the function, for each input.
- Return type:
torch.tensor
- evaluate_function(**inp)[source]
Evaluates the original input function. Should not be used directly; use the __call__ method instead.
- property necessary_args
Returns the function arguments that are needed to evaluate this function.
- Returns:
The needed arguments.
- Return type:
- property optional_args
Returns the function arguments that are optional to evaluate this function.
- Returns:
The optional arguments.
- Return type:
- partially_evaluate(**args)[source]
(Partially) evaluates a given function.
- Parameters:
**args – The arguments where the function should be (partially) evaluated.
- Returns:
Out – If the input arguments are enough to evaluate the whole function, the corresponding output is returned. If some needed arguments are missing, a copy of this UserFunction is returned, where the values of **args are added to its default values.
- Return type:
value or UserFunction
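A hedged sketch of the two possible outcomes, continuing the UserFunction example from above (function and values are assumptions):

```python
import torch
from torchphysics.utils.user_fun import UserFunction

f = UserFunction(lambda x, t: x * t)

# Not all needed arguments are given -> a copy of the UserFunction is returned,
# with t stored as an additional default value:
g = f.partially_evaluate(t=torch.tensor([[2.0]]))

# All needed arguments are given -> the function output is returned directly:
value = f.partially_evaluate(x=torch.tensor([[3.0]]), t=torch.tensor([[2.0]]))
```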