Recurrent Neural Network (RNN) Models

class softsensor.recurrent_models.AR_RNN(input_channels, pred_size, window_size, rnn_window, blocks, num_layers, blocktype='LSTM', hidden_size=None, activation='relu', bias=True, dropout=None, forecast=1, Pred_Type='Point')[source]

Autoregressive Recurrent Network that utilises the past outputs in combination with hidden cells

Parameters:
  • input_channels (int) – Number of input channels

  • pred_size (int) – Number of predicted values

  • window_size (int) – Size of the sliding window applied to the time series

  • rnn_window (int) – Window size of the recurrent connection

  • blocks (int) – Number of parallel recurrent blocks.

  • num_layers (int) – Number of stacked recurrent blocks (depth of the recurrent part).

  • blocktype (str, optional) – Type of the recurrent blocks; options are ‘RNN’, ‘GRU’ and ‘LSTM’. The default is ‘LSTM’.

  • hidden_size (list of int or None, optional) – List that gives the sizes of the hidden units. The default is None.

  • activation (str, optional) – Activation function to activate the feature space. The default is ‘relu’.

  • bias (bool, optional) – If True, bias weights are used. The default is True.

  • dropout (float [0,1], optional) – Adds a dropout layer with the given probability after each linear layer. The default is None.

Return type:

None.

Examples

>>> import softsensor.recurrent_models
>>> import torch
>>> m = softsensor.recurrent_models.AR_RNN(2, 1, 10, 10, 16, 1)
>>> print(m)
AR_RNN(
  (RecBlock): _LSTM(
    (lstm): LSTM(30, 16, batch_first=True)
  )
  (DNN): Feed_ForwardNN(
    (DNN): Sequential(
      (0): Linear(in_features=16, out_features=1, bias=True)
    )
  )
)
>>> input = torch.randn(32, 2, 10)
>>> rec_input = torch.randn(32, 1, 10)
>>> output = m(input, rec_input)
>>> print(output.shape)
torch.Size([32, 1, 1])
estimate_uncertainty_mean_std(inp, x_rec, device='cpu')[source]
forward(inp, x_rec, device='cpu')[source]

Forward function to propagate through the network

Parameters:
  • inp (torch.tensor dtype=torch.float) – Input tensor for forward propagation, shape=[batch size, external channels, window_size]

  • x_rec (torch.tensor, dtype=torch.float) – Recurrent Input for forward Propagation. shape=[batch size, pred_size, rnn_window]

  • device (str, optional) – device to compute on. Needed because of the storage of the hidden cells. The default is ‘cpu’.

Returns:

output – shape=[batch size, pred_size, forecast]

Return type:

torch.tensor dtype=torch.float()

forward_sens(inp, device='cpu')[source]

Forward function to propagate through the network using a single input tensor that is already concatenated, which allows for gradient-based sensitivity analysis

Parameters:
  • inp (torch.tensor dtype=torch.float) – Input tensor for forward propagation, shape=[batch size, flatten_size]

  • device (str, optional) – device to compute on. Needed because of the storage of the hidden cells. The default is ‘cpu’.

Returns:

output – shape=[batch size, pred_size, forecast]

Return type:

torch.tensor dtype=torch.float()
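
Example

Based on the example in the introduction; a minimal sketch, assuming the concatenated input stacks the flattened external input and the flattened recurrent input, i.e. flatten_size = 2*10 + 1*10 = 30 here.

>>> flat_inp = torch.randn(32, 30, requires_grad=True)  # assumed layout: 2*10 external + 1*10 recurrent
>>> output = m.forward_sens(flat_inp)
>>> print(output.shape)
torch.Size([32, 1, 1])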

get_recurrent_weights()[source]

Function that returns the weights that affect the recurrent input of the network

Returns:

recurrent_weights – List of the weights that affect the recurrent input of the network.

Return type:

list of weight Tensors

Example

Based on the example in the introduction

>>> rec_w = m.get_recurrent_weights()
>>> print(rec_w[0].shape)
torch.Size([64, 16])
>>> print(rec_w[1].shape)
torch.Size([64, 10])
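The leading dimension 64 results from the four LSTM gates stacked over the 16 hidden units (4 * 16); the first tensor acts on the hidden state, the second on the rnn_window * pred_size = 10 recurrent inputs.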
prediction(dataloader, device='cpu', sens_params=None)[source]

Prediction of a whole time series

Parameters:
  • dataloader (Dataloader) – Dataloader to predict output

  • device (str, optional) – device to compute on. The default is ‘cpu’.

  • sens_params (dict, optional) – Dictionary that contains the parameters for the sensitivity analysis. Key ‘method’ defines the method for sensitivity analysis: ‘gradient’ or ‘perturbation’. Key ‘comp’ defines whether gradients are computed for sensitivity analysis. Key ‘plot’ defines whether the results of the sensitivity analysis are visualized. Key ‘sens_length’ defines the size of the randomly sampled subset of timesteps used for the analysis. (If not a multiple of the model’s forecast, the number is rounded up to the next multiple.) The default is None, i.e. no sensitivity analysis is computed.

Returns:

  • if comp_sens is False – torch.Tensor: Tensor of the same length as the input, containing the predictions.

  • if comp_sens is True – (torch.Tensor, dict): Tuple of a tensor of the same length as the input and a sensitivity dict. Key is the prediction type, value is the sensitivity tensor.
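
Example

A minimal sketch of a sensitivity run, assuming a dataloader loader built analogously to the RNN_DNN example below but additionally providing the recurrent window; the key names follow the sens_params description above.

>>> sens_params = {'method': 'gradient', 'comp': True, 'plot': False}
>>> pred, sensitivities = m.prediction(loader, sens_params=sens_params)  # hypothetical dataloader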

class softsensor.recurrent_models.RNN_DNN(input_channels, pred_size, window_size, blocks, num_layers, blocktype='LSTM', hidden_size=None, activation='relu', bias=True, dropout=None, forecast=1, Pred_Type='Point')[source]

Recurrent Network that utilises a hidden state

Parameters:
  • input_channels (int) – Number of input channels

  • pred_size (int) – Number of predicted values

  • window_size (int) – Size of the sliding window applied to the time series

  • blocks (int) – Number of parallel recurrent blocks.

  • num_layers (int) – Number of stacked recurrent blocks (depth of the recurrent part).

  • blocktype (str, optional) – Type of the recurrent blocks; options are ‘RNN’, ‘GRU’ and ‘LSTM’. The default is ‘LSTM’.

  • hidden_size (list of int or None, optional) – List that gives the sizes of the hidden units. The default is None.

  • activation (str, optional) – Activation function to activate the feature space. The default is ‘relu’.

  • bias (bool, optional) – If True, bias weights are used. The default is True.

  • dropout (float [0,1], optional) – Adds a dropout layer with the given probability after each linear layer. The default is None.

Return type:

None.

Examples

>>> import softsensor.recurrent_models
>>> import torch
>>> m = softsensor.recurrent_models.RNN_DNN(2, 1, 10, 16, 1)
>>> print(m)
RNN_DNN(
  (RecBlock): _LSTM(
    (lstm): LSTM(20, 16, batch_first=True)
  )
  (DNN): Feed_ForwardNN(
    (DNN): Sequential(
      (0): Linear(in_features=16, out_features=1, bias=True)
    )
  )
)
>>> input = torch.randn(32, 2, 10)
>>> output = m(input)
>>> print(output.shape)
torch.Size([32, 1, 1])
>>> import softsensor.meas_handling as ms
>>> import numpy as np
>>> import pandas as pd
>>> t = np.linspace(0, 1.0, 101)
>>> d = {'inp1': np.random.randn(101),
         'inp2': np.random.randn(101),
         'out': np.random.randn(101)}
>>> handler = ms.Meas_handling([pd.DataFrame(d, index=t)], ['train'],
                               ['inp1', 'inp2'], ['out'], fs=100)
>>> loader = handler.give_list(window_size=10, keyword='training',
                               batch_size=32, Add_zeros=True)
>>> pred = m.prediction(loader[0])
>>> print(pred.shape)
torch.Size([1, 101])
estimate_uncertainty_mean_std(inp, device='cpu')[source]
forward(inp, device='cpu')[source]

Forward function to propagate through the network

Parameters:
  • inp (torch.tensor dtype=torch.float) – Input tensor for forward propagation, shape=[batch size, external channels, window_size]

  • device (str, optional) – device to compute on. Needed because of the storage of the hidden cells. The default is ‘cpu’.

Returns:

output – shape=[batch size, pred_size, forecast]

Return type:

torch.tensor dtype=torch.float()

forward_sens(inp, device='cpu')[source]
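
By analogy with AR_RNN.forward_sens above, a hedged sketch where the concatenated input is assumed to have flatten_size = input_channels * window_size = 20 for the model from the introduction.

>>> flat_inp = torch.randn(32, 20, requires_grad=True)  # assumed flattened input layout
>>> output = m.forward_sens(flat_inp)
>>> print(output.shape)
torch.Size([32, 1, 1])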
get_recurrent_weights()[source]

Function that returns the weights that affect the recurrent input of the network

Returns:

recurrent_weights – List of the weights that affect the recurrent input of the network.

Return type:

list of weight Tensors

Example

Based on the example in the introduction

>>> rec_w = m.get_recurrent_weights()
>>> print(rec_w[0].shape)
torch.Size([64, 16])
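Here only the hidden-state weights appear (shape [4 * 16, 16] from the four LSTM gates stacked over the 16 hidden units), since RNN_DNN has no autoregressive input.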
prediction(dataloader, device='cpu', sens_params=None)[source]

Prediction of a whole time series

Parameters:
  • dataloader (Dataloader) – Dataloader to predict output

  • device (str, optional) – device to compute on. The default is ‘cpu’.

  • sens_params (dict, optional) – Dictionary that contains the parameters for the sensitivity analysis. Key ‘method’ defines the method for sensitivity analysis: ‘gradient’ or ‘perturbation’. Key ‘comp’ defines whether gradients are computed for sensitivity analysis. Key ‘plot’ defines whether the results of the sensitivity analysis are visualized. Key ‘sens_length’ defines the size of the randomly sampled subset of timesteps used for the analysis. (If not a multiple of the model’s forecast, the number is rounded up to the next multiple.) The default is None, i.e. no sensitivity analysis is computed.

Returns:

  • if comp_sens is False – torch.Tensor: Tensor of the same length as the input, containing the predictions.

  • if comp_sens is True – (torch.Tensor, dict): Tuple of a tensor of the same length as the input and a sensitivity dict. Key is the prediction type, value is the sensitivity tensor.
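
Example

Based on the example in the introduction; a minimal sketch of a sensitivity run (the key names follow the sens_params description above).

>>> sens_params = {'method': 'gradient', 'comp': True, 'plot': False}
>>> pred, sensitivities = m.prediction(loader[0], sens_params=sens_params)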

class softsensor.recurrent_models.parr_RNN_DNN(input_channels, pred_size, blocks, hidden_window=1, num_layers=1, blocktype='LSTM', hidden_size=None, activation='relu', bias=True, dropout=None, forecast=1, Pred_Type='Point')[source]
estimate_uncertainty_mean_std(inp, device='cpu')[source]
forward(inp, device='cpu')[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

prediction(dataloader, device='cpu')[source]