Model Training

softsensor.train_model.train_model(model, train_loader, max_epochs, optimizer, device='cpu', criterion=MSELoss(), val_loader=None, patience=None, print_results=False, stabelizer=None, local_wd=None, give_results=True, rel_perm=0)[source]

Training function for autoregressive modelling of time series

Parameters:
  • model (nn.Module) – model must have a forward function to predict the output

  • train_loader (dataloader or list of dataloader) – dataloader for training if Model_Type is ‘Feed_Forward’ or ‘AR’, otherwise a list of dataloaders.

  • max_epochs (int) – Maximum number of training epochs.

  • optimizer (torch.optim) – optimizer with trainable parameters of the model.

  • device (str, optional) – device for computation. The default is ‘cpu’.

  • criterion (nn.Loss, optional) – Loss function for training. The default is nn.MSELoss().

  • val_loader (dataloader or list of dataloader, optional) – dataloader for validation if Model_Type is ‘Feed_Forward’ or ‘AR’, otherwise a list of dataloaders. The default is None.

  • patience (int, optional) – patience for early stopping on the validation loss (only used if val_loader is not None). The default is None.

  • print_results (bool, optional) – If True, prints the results of every epoch. The default is False.

  • stabelizer (float or scheduler, optional) – stability score for Model_Type ‘AR’; either a constant float or a scheduler from softsensor.stab_scheduler. The default is None.

  • local_wd (float, optional) – local weight decay applied to all weights that interact with the recurrent input, for Model_Type ‘AR’. The default is None.

  • give_results (bool, optional) – If True, the results dictionary is returned after training. The default is True.

  • rel_perm (float, optional) – relative perturbation applied to the input to prevent overfitting; see the combined example below. The default is 0.

Returns:

results – dictionary with arrays for train_loss, val_loss and stability_score.

Return type:

dict

Examples

Data Preprocessing

>>> import softsensor.meas_handling as ms
>>> import numpy as np
>>> import pandas as pd
>>> t = np.linspace(0, 1.0, 1001)
>>> d = {'inp1': np.random.randn(1001),
         'inp2': np.random.randn(1001),
         'out': np.random.randn(1001)}
>>> handler = ms.Meas_handling([pd.DataFrame(d, index=t)], ['train'],
                               ['inp1', 'inp2'], ['out'], fs=100)

Train an ARNN

>>> import softsensor.autoreg_models as am
>>> from softsensor.train_model import train_model
>>> import torch.optim as optim
>>> import torch.nn as nn
>>> model = am.ARNN(2, 1, 10, 10, [16])
>>> train_dat, val_dat = handler.give_torch_loader(10, 'training', rnn_window=10,
                                                   shuffle=True)
>>> opt = optim.Adam(model.parameters(), lr=1e-4)
>>> crit = nn.MSELoss()
>>> results = train_model(model=model, train_loader=train_dat, max_epochs=5,
                          optimizer=opt, device='cpu', criterion=crit, stabelizer=5e-3,
                          val_loader=val_dat, print_results=False)
>>> print(results['val_loss'])
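
Train an ARNN with early stopping, local weight decay and input perturbation

A sketch combining the patience, local_wd and rel_perm options; the values chosen below are illustrative, not tuned.

>>> import softsensor.autoreg_models as am
>>> from softsensor.train_model import train_model
>>> import torch.optim as optim
>>> import torch.nn as nn
>>> model = am.ARNN(2, 1, 10, 10, [16])
>>> train_dat, val_dat = handler.give_torch_loader(10, 'training', rnn_window=10,
                                                   shuffle=True)
>>> opt = optim.Adam(model.parameters(), lr=1e-4)
>>> crit = nn.MSELoss()
>>> results = train_model(model=model, train_loader=train_dat, max_epochs=5,
                          optimizer=opt, device='cpu', criterion=crit,
                          val_loader=val_dat, patience=3, local_wd=1e-4,
                          rel_perm=1e-2, print_results=False)
>>> print(results['val_loss'])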

Train an ARNN with stability scheduling

>>> import softsensor.autoreg_models as am
>>> from softsensor.train_model import train_model
>>> from softsensor.stab_scheduler import get_scheduler
>>> import torch.optim as optim
>>> import torch.nn as nn
>>> model = am.ARNN(2, 1, 10, 10, [16])
>>> stab = get_scheduler('log_lin', model, track_n=30)
>>> train_dat, val_dat = handler.give_torch_loader(10, 'training', rnn_window=10,
                                                   shuffle=True)
>>> opt = optim.Adam(model.parameters(), lr=1e-4)
>>> crit = nn.MSELoss()
>>> results = train_model(model=model, train_loader=train_dat, max_epochs=5,
                          optimizer=opt, device='cpu', criterion=crit, stabelizer=stab,
                          val_loader=val_dat, print_results=False)
>>> print(results['stabelizer'])

Train an ARNN with Mean Variance Estimation (MVE)

>>> import softsensor.autoreg_models as am
>>> from softsensor.train_model import train_model
>>> from softsensor.losses import HeteroscedasticNLL
>>> import torch.optim as optim
>>> mean_model = am.ARNN(2, 1, 10, 10, [16])
>>> model = am.SeparateMVEARNN(2, 1, 10, 10, mean_model, [16])
>>> train_dat, val_dat = handler.give_torch_loader(10, 'training', rnn_window=10,
                                                   shuffle=True)
>>> opt = optim.Adam(model.parameters(), lr=1e-4)
>>> crit = HeteroscedasticNLL()
>>> results = train_model(model=model, train_loader=train_dat, max_epochs=5,
                          optimizer=opt, device='cpu', criterion=crit, stabelizer=5e-3,
                          val_loader=val_dat, print_results=False)
>>> print(results['val_loss'])

Train an RNN

>>> import softsensor.recurrent_models as rm
>>> from softsensor.train_model import train_model
>>> import torch.optim as optim
>>> import torch.nn as nn
>>> model = rm.RNN_DNN(2, 1, 10, 16, 1)
>>> train_dat = handler.give_list(10, 'training')
>>> opt = optim.Adam(model.parameters(), lr=1e-4)
>>> crit = nn.MSELoss()
>>> results = train_model(model=model, train_loader=train_dat, max_epochs=5,
                          optimizer=opt, device='cpu', criterion=crit,
                          val_loader=train_dat, print_results=False)
>>> print(results['val_loss'])
class softsensor.train_model.early_stopping(patience)[source]

Early stopping function to prevent overfitting in the training process

Parameters:
  • model (pytorch Network) – Model whose parameters are temporarily stored to keep the best model

  • patience (int) – Patience of the early stopping, i.e. how many epochs without an improvement in performance are allowed

Return type:

None.

call(loss, model)[source]

Call method for a given validation loss history and model

Parameters:
  • loss (list) – list of validation losses; individual elements must be of torch dtype

  • model (pytorch Network) – Network to store parameters from in case of improvement.

Returns:

True to stop training, False to continue training.

Return type:

bool
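
Examples

A minimal usage sketch; the dummy loss values and the reuse of model from the examples above are illustrative assumptions, not part of the API.

>>> from softsensor.train_model import early_stopping
>>> import torch
>>> stop = early_stopping(patience=2)
>>> val_loss = []
>>> for l in [1.0, 0.8, 0.9, 0.95, 1.1]:
        val_loss.append(torch.tensor(l))  # losses must be of torch dtype
        if stop.call(val_loss, model):  # True once patience is exceeded
            break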