Different parameter types¶
This notebook demonstrates the different parameter types and gives an idea of their functionality.
Copyright (c) 2021 - for information on the respective copyright owner see the NOTICE file and/or the repository https://github.com/boschresearch/parameterspace
SPDX-License-Identifier: Apache-2.0
import numpy as np
import matplotlib.pyplot as plt
import parameterspace
A simple continuous parameter¶
Let's first create a continuous parameter defined on the interval [-5, 5] and print some information about it:
f1 = parameterspace.ContinuousParameter(name='f1', bounds=[-5., 5.])
print(f1)
Now let's draw some samples from it:
print("Samples: ", f1.sample_values(5))
Every parameter can compute the log-likelihood of a given value (this depends on the prior, but more on that later).
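For example, the log-likelihood of a single value can be evaluated directly (a minimal sketch; the exact number depends on the prior):
print("Log-likelihood of 0.0:", f1.loglikelihood(0.0))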
def likelihood_plots(parameter, num_samples=1024, cdf_plot=True):
    """Plot the prior pdf and histograms of sampled values for a parameter."""
    xs = np.linspace(parameter.bounds[0] - 1, parameter.bounds[1] + 1, 256)
    likelihoods = np.exp([parameter.loglikelihood(x) for x in xs])

    # prior pdf evaluated on a grid extending slightly beyond the bounds
    plt.plot(xs, likelihoods)
    plt.title('prior likelihood')
    plt.xlabel(r'$%s$' % parameter.name)
    plt.ylabel('pdf')
    plt.show()

    # empirical pdf based on sampled values
    plt.hist(parameter.sample_values(num_samples=num_samples), density=True)
    plt.title('empirical PDF based on sampled values')
    plt.show()

    # optional empirical cdf based on sampled values
    if cdf_plot:
        plt.hist(parameter.sample_values(num_samples=num_samples), density=True, cumulative=True, bins=64)
        plt.title('empirical CDF based on sampled values')
        plt.show()
likelihood_plots(f1)
Defining a prior¶
To make things more useful, every parameter has a prior associated with it. The default is an uninformative prior (i.e. a uniform distribution), but other, more interesting ones are also available, namely:
- the truncated normal distribution
- the Beta distribution
- a categorical distribution
Because the numerical representation of the parameters is mapped into the unit hypercube, all priors must be defined in the transformed range. As an example, let us consider a truncated normal prior for a continuous parameter on the interval $[-5, 5]$. If we want to place a prior with mean $0$ and a standard deviation of $2.5$, we would use a TruncatedNormal with
- $mean=0.5$, because the interval is mapped linearly onto $[0, 1]$, i.e. the original value of $0$ is mapped to $0.5$
- $std=0.25$, because the original interval has a width of $10$ and $2.5$ is a quarter of that. Therefore, the mapped standard deviation must be a quarter of the mapped interval length, which is one. The sketch after this list spells out the arithmetic.
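The conversion from the original-space prior parameters to the transformed ones is just a rescaling by the interval width. A minimal sketch of this arithmetic (variable names are illustrative only):
lower, upper = -5.0, 5.0          # original bounds
orig_mean, orig_std = 0.0, 2.5    # desired prior in the original space

width = upper - lower                            # 10
transformed_mean = (orig_mean - lower) / width   # (0 - (-5)) / 10 = 0.5
transformed_std = orig_std / width               # 2.5 / 10 = 0.25
print(transformed_mean, transformed_std)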
f2_prior = parameterspace.priors.TruncatedNormal(mean=.5, std=.25)
f2 = parameterspace.ContinuousParameter(name='f_1', bounds=[-5., 5.], prior=f2_prior)
print(f2)
likelihood_plots(f2)
Beta Prior¶
Here is another example using a Beta prior in the transformed space:
bounds = [-3, 5]
i1_prior = parameterspace.priors.Beta(a=2, b=0.5)
i1 = parameterspace.ContinuousParameter(name='i_1', bounds=bounds, prior=i1_prior)
print('Samples: ', i1.sample_values(5))
likelihood_plots(i1)
Integer parameters¶
Integer parameters are supported as well. Here is one on the interval [1, 256] with a logarithmic transformation:
bounds = [1, 256]
i2 = parameterspace.IntegerParameter(name='i_2', bounds=bounds, transformation='log')
print('Samples: ', i2.sample_values(32))
likelihood_plots(i2)
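Because the prior is uniform in the log-transformed space, small values are sampled more densely. As a rough sanity check (just a sketch, not part of the original notebook), about half of the samples should fall below $\sqrt{256} = 16$, the median of a log-uniform distribution on $[1, 256]$:
samples = np.array(i2.sample_values(num_samples=10_000))
# fraction of samples below the log-space median; should be roughly 0.5
print("Fraction of samples below 16:", (samples < 16).mean())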
Categorical parameters are also supported¶
Categorical parameters do not have an intrinsic ordering, so the only meaningful prior assigns a probability to each value. The values can be of any type; even mixing types is possible.
values = ['foo', 'bar', 42, np.pi]
c1_prior = [3, 1, 1, 1.5]
c1 = parameterspace.CategoricalParameter(name='c_1', values=values, prior=c1_prior)
print(c1)
print('Samples: ', c1.sample_values(num_samples=10))
plt.bar([0,1,2,3], np.exp([c1.loglikelihood(value) for value in values]))
plt.xticks([0,1,2,3], values)
plt.title('prior likelihood')
plt.xlabel(r'$%s$'%c1.name)
plt.ylabel('pdf')
plt.show()
c1.sample_values(num_samples=100)
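As a final sanity check (again only a sketch), the empirical frequencies of a larger sample should roughly match the normalized prior weights, i.e. $[3, 1, 1, 1.5] / 6.5$:
samples = c1.sample_values(num_samples=10_000)
for value in values:
    # fraction of samples equal to this value; should be close to its normalized prior weight
    frequency = np.mean([s == value for s in samples])
    print(value, frequency)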