Welcome to PROPEL Loss’s documentation!
class propel_loss.propel.PROPEL(sigma=0.1, reduction='mean')
Bases: torch.nn.modules.loss._Loss
PRObabilistic Parametric rEgression Loss (PROPEL) from [1], enabling neural networks to output the parameters of a mixture of Gaussian distributions.
[1] “PROPEL: Probabilistic Parametric Regression Loss for Convolutional Neural Networks”, M. Asad et al. - 25th International Conference on Pattern Recognition (ICPR), 2020
Usage instructions: To use the loss function, the expected output shape from the neural network is [num_batches, num_gaussians, 2 * num_dimensions]
where,
num_batches –> number of batches
num_gaussians –> number of Gaussians in the mixture of Gaussians
num_dimensions –> number of dimensions in each sample (the factor of 2 accounts for the mean/variance of each dimension)
num_gaussians can be set according to how complex you wish the mixture of Gaussians to be, e.g. num_gaussians = 2 vs num_gaussians = 10, where 10 can model a much more complex distribution whereas 2 applies a regularisation effect by trying to model the ground-truth distribution with only two Gaussians in the mixture.
One example is inferring 3D head orientation from 2D images. The output is 3-dimensional (num_dimensions = 3), so with two Gaussians and a batch size of b the network output has shape
[b, 2, 2 * num_dimensions]
= [b, 2, 6] - shape for the output of the network
For further usage examples, see the optimisation tests in the tests/ folder of this project.
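Below is a minimal sketch of wiring a network output into the loss for the 3D head-orientation example above. The toy backbone, the criterion(pred, target) call convention and the target layout ([num_batches, num_dimensions] ground-truth values) are assumptions made for illustration only; the tests in tests/ show the authoritative usage.

import torch
import torch.nn as nn
import torch.nn.functional as F
from propel_loss.propel import PROPEL

num_gaussians, num_dimensions = 2, 3     # two mixture components, 3D target

# Toy backbone (assumption): maps a 128-D feature vector to the mixture
# parameters, reshaped to the required [num_batches, num_gaussians, 2 * num_dimensions].
backbone = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, num_gaussians * 2 * num_dimensions),
)

criterion = PROPEL(sigma=0.1, reduction='mean')

features = torch.randn(4, 128)           # batch of 4 samples
out = backbone(features).view(-1, num_gaussians, 2 * num_dimensions)

# Keep the variance half of the output positive (assumption: softplus activation).
mu, raw_sigma = out[..., :num_dimensions], out[..., num_dimensions:]
pred = torch.cat([mu, F.softplus(raw_sigma)], dim=-1)   # shape [4, 2, 6]

# Assumed target layout: one ground-truth value per dimension, per sample.
target = torch.randn(4, num_dimensions)

loss = criterion(pred, target)           # assumed criterion(pred, target) convention
loss.backward()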
reduction: str
propel_loss.propel.g_function(muM1, sigmaM1, muM2, sigmaM2)
G function implementation.
Implements the following equation:
\[G(P_i, P_j) = \frac{e^{\big[\frac{2\mu_{x_{1i}}\mu_{x_{1j}} - \mu_{x_{1i}}^2 - \mu_{x_{1j}}^2}{2(\sigma_{x_{1i}}+\sigma_{x_{1j}})} + \cdots + \frac{2\mu_{x_{ni}}\mu_{x_{nj}} - \mu_{x_{ni}}^2 - \mu_{x_{nj}}^2}{2(\sigma_{x_{ni}}+\sigma_{x_{nj}})}\big]}}{(\sqrt{2\pi})^n \sqrt{(\sigma_{x_{1i}} + \sigma_{x_{1j}}) \cdots (\sigma_{x_{ni}} + \sigma_{x_{nj}})}}\]
- Parameters
muM1 (torch.tensor) – mean for first Gaussian distribution
sigmaM1 (torch.tensor) – standard deviation for first Gaussian distribution
muM2 (torch.tensor) – mean for second Gaussian distribution
sigmaM2 (torch.tensor) – standard deviation for second Gaussian distribution
- Returns
result of G(P_1, P_2)
- Return type
torch.tensor
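For illustration, here is a minimal, self-contained sketch of the G equation above in plain PyTorch, assuming the mean and sigma tensors carry the n dimensions on their last axis; it is not necessarily the package's exact implementation, so prefer g_function in practice.

import torch

def g_sketch(mu1, sigma1, mu2, sigma2):
    """Sketch of G(P_i, P_j) for diagonal Gaussians; last axis holds the n dimensions."""
    n = mu1.shape[-1]
    sigma_sum = sigma1 + sigma2
    # Exponent: sum over dimensions of (2*mu1*mu2 - mu1^2 - mu2^2) / (2*(sigma1 + sigma2))
    exponent = ((2 * mu1 * mu2 - mu1 ** 2 - mu2 ** 2) / (2 * sigma_sum)).sum(dim=-1)
    # Normaliser: (sqrt(2*pi))^n * sqrt(prod_k(sigma1_k + sigma2_k))
    norm = (2 * torch.pi) ** (n / 2) * torch.sqrt(sigma_sum.prod(dim=-1))
    return torch.exp(exponent) / norm

# Example: G between two 3-D Gaussians
mu_a, sig_a = torch.zeros(3), torch.ones(3)
mu_b, sig_b = torch.ones(3), torch.ones(3)
print(g_sketch(mu_a, sig_a, mu_b, sig_b))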
propel_loss.propel.getloss(muG, sigmaG, muM, sigmaM)
PROPEL loss function. Implements the following equations:
- In the forward pass:
\[L = -\log\underbrace{\left[ \frac{2}{I} \sum_{i=1}^{I} G(P_{gt}, P_{i}) \right]}_{T1} + \log \underbrace{\left[H(P_{gt}) + \frac{1}{I^2}\sum_{i=1}^{I} H(P_{i}) + \frac{2}{I^2} \sum_{i < j}^{I} G(P_{i},P_{j}) \right]}_{T2}\]
- In the backward pass:
\[\frac{\partial L}{\partial \mu_{x_{ni}}} = -\frac{1}{T1}\left[ \frac{\partial G(P_{gt}, P_{i})}{\partial \mu_{x_{ni}}} \right] + \frac{1}{T2} \left[ \frac{2}{I^2} \sum_{i < j}^{I} \frac{\partial G(P_{i},P_{j})}{\partial \mu_{x_{ni}}} \right]\]
\[\frac{\partial L}{\partial \sigma_{x_{ni}}} = -\frac{1}{T1}\left[ \frac{\partial G(P_{gt}, P_{i})}{\partial \sigma_{x_{ni}}} \right] + \frac{1}{T2} \left[ \frac{1}{I^2} \frac{\partial H(P_{i})}{\partial \sigma_{x_{ni}}} + \frac{2}{I^2} \sum_{i < j}^{I} \frac{\partial G(P_{i},P_{j})}{\partial \sigma_{x_{ni}}} \right]\]
- Parameters
muG (torch.tensor) – mean for the ground-truth Gaussian distribution
sigmaG (torch.tensor) – standard deviation for the ground-truth Gaussian distribution
muM (torch.tensor) – mean for the model's Mixture of Gaussians distribution (model output)
sigmaM (torch.tensor) – standard deviation for the model's Mixture of Gaussians distribution (model output)
- Returns
computed loss in the forward pass; gradients w.r.t. muM/sigmaM in the backward pass
- Return type
torch.tensor
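As a rough illustration of the forward equation, the sketch below evaluates L = -log(T1) + log(T2) for one ground-truth Gaussian against I mixture components. It reuses the hypothetical g_sketch helper from the g_function entry above and the h_sketch helper shown under h_function below; it mirrors the maths only and is not the package's autograd-enabled getloss.

import torch

def propel_forward_sketch(muG, sigmaG, muM, sigmaM):
    """Sketch of L = -log(T1) + log(T2).

    muG, sigmaG: [num_dims]        ground-truth Gaussian
    muM, sigmaM: [I, num_dims]     I mixture components (model output)
    """
    I = muM.shape[0]
    # T1 = (2 / I) * sum_i G(P_gt, P_i)
    t1 = (2.0 / I) * g_sketch(muG.unsqueeze(0), sigmaG.unsqueeze(0), muM, sigmaM).sum()
    # T2 = H(P_gt) + (1 / I^2) * sum_i H(P_i) + (2 / I^2) * sum_{i<j} G(P_i, P_j)
    t2 = h_sketch(sigmaG) + h_sketch(sigmaM).sum() / I ** 2
    for i in range(I):
        for j in range(i + 1, I):
            t2 = t2 + (2.0 / I ** 2) * g_sketch(muM[i], sigmaM[i], muM[j], sigmaM[j])
    return -torch.log(t1) + torch.log(t2)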
propel_loss.propel.h_function(sigmaM)
H function implementation.
Implements the following equation:
\[H(P_i) = \frac{1}{(2\sqrt{\pi})^n \sqrt{\sigma_{x_{1i}}\cdots\sigma_{x_{ni}}}}\]
- Parameters
sigmaM (torch.tensor) – standard deviation of our input Gaussian distribution
- Returns
result of H(P_i)
- Return type
torch.tensor
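A minimal sketch of the equation above in plain PyTorch, again assuming the last axis holds the n dimensions; treat the package's h_function as the reference implementation.

import torch

def h_sketch(sigma):
    """Sketch of H(P_i) = 1 / ((2*sqrt(pi))^n * sqrt(prod_k sigma_k))."""
    n = sigma.shape[-1]
    return 1.0 / ((2 * torch.pi ** 0.5) ** n * torch.sqrt(sigma.prod(dim=-1)))

print(h_sketch(torch.ones(3)))   # = 1 / (2*sqrt(pi))^3 for unit sigmas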
propel_loss.propel.unpack_prediction(pred, num_dims)
Helper function to unpack the tensor coming from the output of a neural network.
It expects the pred tensor to have the following shape: [num_batch, num_gaussians, num_dimensions * 2], where:
the first half of the last axis, [num_batch, num_gaussians, :num_dimensions], corresponds to the means
the second half, [num_batch, num_gaussians, num_dimensions:], corresponds to the standard deviations
- Parameters
pred (torch.tensor) – prediction output from a neural network with shape [num_batch, num_gaussians, num_dimensions * 2]
num_dims (int) – number of dimensions to unpack data for, e.g. 3 for 3D problems
- Returns
unpacked mean (g_mu) and standard deviation (g_sigma)
- Return type
tuple of torch.tensors
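The split performed by this helper amounts to slicing the last axis in half, roughly as below; the exact return shapes of unpack_prediction may differ, so this sketch is illustrative only.

import torch

def unpack_sketch(pred, num_dims):
    """Split [num_batch, num_gaussians, 2 * num_dims] into (mean, std) halves."""
    mu = pred[..., :num_dims]        # first half of the last axis: means
    sigma = pred[..., num_dims:]     # second half: standard deviations
    return mu, sigma

pred = torch.randn(4, 2, 6)          # batch of 4, 2 Gaussians, 3-D problem
mu, sigma = unpack_sketch(pred, num_dims=3)
print(mu.shape, sigma.shape)         # torch.Size([4, 2, 3]) torch.Size([4, 2, 3])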