class SupervisedCriterion[source]

SupervisedCriterion(name:str, differentiable:bool=True, lower_is_better:bool=True, compute_only_on_design_space:bool=True) :: Criterion

A parent class from which all supervised criteria for both classical and learned methods inherit.

Type Default Details
name str The name of this criterion which will be monitored in logging.
differentiable bool True Whether the criterion is differentiable or not. Only differentiable criteria can be used as loss/objective functions.
lower_is_better bool True Whether lower values of the criterion correspond to better scores.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.

SupervisedCriterion.__call__[source]

SupervisedCriterion.__call__(solutions:list, gt_solutions:list=None, binary:bool=False)

Calculates the output of the criterion for all solutions.

Type Default Details
solutions list The solutions that should be evaluated with the criterion.
gt_solutions list None Ground truth solutions that are compared element-wise with the solutions.
binary bool False Whether the criterion should be evaluated on binarized densities. Does not have an effect on some criteria.

class WeightedBCE[source]

WeightedBCE(weight:float=0.5, compute_only_on_design_space:bool=True) :: SupervisedCriterion

Weighted binary cross entropy [1] is a variant of binary cross entropy. The weight can be used to trade off false negatives against false positives: to reduce the number of false negatives, set weight > 1; to reduce the number of false positives, set weight < 1. The criterion reaches its best value at 0, and higher values correspond to worse scores.

Type Default Details
weight float 0.5 The weight of the weighted binary cross entropy function which is used to take class imbalance into account.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
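A minimal NumPy sketch of the weighted BCE computation described above (the actual dl4to implementation operates on PyTorch tensors and may differ in details; `weighted_bce` here is an illustrative helper, not the library API):

```python
import numpy as np

def weighted_bce(pred, target, weight=0.5, eps=1e-6):
    """Weighted binary cross entropy over voxel densities in [0, 1].

    weight > 1 penalizes false negatives more strongly,
    weight < 1 penalizes false positives more strongly.
    """
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(weight * target * np.log(pred)
                    + (1 - target) * np.log(1 - pred))
```

For a near-perfect prediction the loss approaches 0; increasing the weight inflates the penalty on false negatives.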

WeightedBCE.set_optimal_weight[source]

WeightedBCE.set_optimal_weight(dataset:dl4to.dataset.TopoDataset, binary:bool=False)

Calculates the optimal BCE weight based on the solutions in the dataset.

Type Default Details
dataset dl4to.dataset.TopoDataset The dataset based on which the optimal weight is determined.
binary bool False Whether the densities in the solutions are thresholded at 0.5 before the weight is determined.
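The exact weighting rule is internal to dl4to; a plausible sketch, assuming the common heuristic of setting the weight to the void-to-solid voxel ratio of the ground truth densities (`optimal_bce_weight` is an illustrative helper, not the library API):

```python
import numpy as np

def optimal_bce_weight(gt_densities, binary=False, eps=1e-6):
    """Heuristic BCE weight: ratio of void voxels to solid voxels.

    gt_densities is a list of arrays with values in [0, 1].
    If binary=True, densities are thresholded at 0.5 first.
    """
    stacked = np.concatenate([d.ravel() for d in gt_densities])
    if binary:
        stacked = (stacked > 0.5).astype(float)
    n_solid = stacked.sum()
    n_void = stacked.size - n_solid
    return n_void / (n_solid + eps)
```

With such a weight, the rare (solid) class contributes roughly as much to the loss as the frequent (void) class.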

class WeightedFocal[source]

WeightedFocal(weight:float=0.5, γ:float=3, ε:float=1e-06, compute_only_on_design_space:bool=True) :: SupervisedCriterion

Focal loss [2] can be seen as a variation of binary cross entropy. It down-weights the contribution of easy examples, enabling the model to focus on learning hard examples, and works well in highly imbalanced class scenarios. The criterion reaches its best value at 0, and higher values correspond to worse scores.

Type Default Details
weight float 0.5 The weight of the weighted focal function which is used to take class imbalance into account.
γ float 3 $γ\geq0$ is the tunable focusing parameter. Setting $γ>0$ reduces the relative loss for well-classified examples, putting more focus on hard, misclassified examples.
ε float 1e-06 A small value $>0$ that avoids division by $0$ and therefore improves numerical stability.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
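A minimal NumPy sketch of one common weighted focal loss formulation (the library's PyTorch implementation may differ; `weighted_focal` is illustrative only):

```python
import numpy as np

def weighted_focal(pred, target, weight=0.5, gamma=3.0, eps=1e-6):
    """Weighted focal loss: the (1 - p)^gamma and p^gamma factors
    down-weight well-classified voxels so that hard, misclassified
    voxels dominate the loss."""
    pred = np.clip(pred, eps, 1 - eps)
    pos = weight * (1 - pred) ** gamma * target * np.log(pred)
    neg = pred ** gamma * (1 - target) * np.log(1 - pred)
    return -np.mean(pos + neg)
```

An easy positive (prediction 0.99) contributes almost nothing, while a hard positive (prediction 0.1) still incurs a substantial loss.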

WeightedFocal.set_optimal_weight[source]

WeightedFocal.set_optimal_weight(dataset:dl4to.dataset.TopoDataset, binary:bool=False)

Calculates the optimal weight based on the solutions in the dataset.

Type Default Details
dataset dl4to.dataset.TopoDataset The dataset based on which the optimal weight is determined.
binary bool False Whether the densities in the solutions are thresholded at 0.5 before the weight is determined.

class Dice[source]

Dice(ε:float=1e-06, compute_only_on_design_space:bool=True) :: SupervisedCriterion

The Dice coefficient is a widely used metric in the computer vision community for calculating the similarity between two images. It has since been adapted as a loss function, known as the Dice loss [3], and is also sometimes referred to as the F1 score [4]. Dice reaches its best value at 0 and its worst value at 1.

Type Default Details
ε float 1e-06 A small value $>0$ that avoids division by $0$ and therefore improves numerical stability.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
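A minimal NumPy sketch of the soft Dice loss (illustrative only; `dice_loss` is not the library API):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P ∩ T| / (|P| + |T|).
    0 for a perfect match, 1 for no overlap."""
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
```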

class Tversky[source]

Tversky(α:float=0.5, ε:float=1e-06, compute_only_on_design_space:bool=True) :: SupervisedCriterion

The Tversky index [5] can be seen as a generalization of the Dice coefficient: it adds a weight to false positives and false negatives. Setting α > 0.5 penalizes false negatives more heavily. This becomes useful for highly imbalanced datasets, where the additional level of control over the loss function yields better small-scale segmentations than the plain Dice coefficient. Like Dice, this criterion reaches its best value at 0 and its worst value at 1.

Type Default Details
α float 0.5 The Tversky weight. For $α=0.5$, the criterion reduces to the regular Dice coefficient.
ε float 1e-06 A small value $>0$ that avoids division by $0$ and therefore improves numerical stability.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
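A minimal sketch of the Tversky loss with α weighting the false negative term, following the description above (conventions for which term α multiplies vary in the literature; `tversky_loss` is illustrative only):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.5, eps=1e-6):
    """1 - Tversky index. With alpha = 0.5 this reduces to the Dice loss;
    alpha > 0.5 penalizes false negatives more heavily."""
    tp = np.sum(pred * target)            # soft true positives
    fp = np.sum(pred * (1 - target))      # soft false positives
    fn = np.sum((1 - pred) * target)      # soft false negatives
    return 1.0 - (tp + eps) / (tp + alpha * fn + (1 - alpha) * fp + eps)
```

For a prediction with one false negative, increasing α visibly increases the loss, while α = 0.5 recovers the Dice loss.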

class FocalTversky[source]

FocalTversky(α:float=0.5, γ:float=3, ε:float=1e-06, compute_only_on_design_space:bool=True) :: SupervisedCriterion

The Focal Tversky loss [6] is a generalization of the Tversky loss. The non-linearity of the loss gives control over how it behaves at different values of the Tversky index. Similar to the Focal loss, which focuses on hard examples by down-weighting easy ones, the Focal Tversky loss also attempts to learn hard examples, with γ controlling the non-linearity of the loss. This criterion reaches its best value at 0, while higher values correspond to worse scores.

Type Default Details
α float 0.5 The Tversky weight. For $α=0.5$, the criterion reduces to the regular Dice coefficient.
γ float 3 $γ\geq0$ is the Focal loss focusing parameter. Setting $γ>0$ reduces the relative loss for well-classified examples, putting more focus on hard, misclassified examples.
ε float 1e-06 A small value $>0$ that avoids division by $0$ and therefore improves numerical stability.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
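One common formulation, following Abraham & Khan [6], raises the Tversky loss to the power 1/γ; a sketch under that assumption (the library's exact exponent convention may differ; `focal_tversky_loss` is illustrative only):

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.5, gamma=3.0, eps=1e-6):
    """(1 - Tversky index)^(1/gamma): for gamma > 1, small losses are
    amplified, which keeps the gradient alive on nearly-solved examples
    while the overall focus stays on hard ones."""
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    tversky = (tp + eps) / (tp + alpha * fn + (1 - alpha) * fp + eps)
    return (1.0 - tversky) ** (1.0 / gamma)
```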

class IoU[source]

IoU(ε:float=1e-06, compute_only_on_design_space:bool=True) :: SupervisedCriterion

The Intersection over Union (IoU) metric, also referred to as the Jaccard index, quantifies the percentage overlap between the target mask and the prediction output. It is closely related to the Dice coefficient, which is often used as a loss function during training. IoU measures the number of voxels common to the target and prediction masks, divided by the total number of voxels present across both masks. IoU reaches its best value at 1 and its worst value at 0, i.e., higher values are better.

Type Default Details
ε float 1e-06 A small value $>0$ that avoids division by $0$ and therefore improves numerical stability.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
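A minimal NumPy sketch of the IoU computation (illustrative only; not the library API):

```python
import numpy as np

def iou(pred, target, eps=1e-6):
    """Intersection over Union (Jaccard index): 1 is best, 0 is worst."""
    intersection = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - intersection
    return (intersection + eps) / (union + eps)
```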

class VoxelAccuracy[source]

VoxelAccuracy(ε:float=1e-06, compute_only_on_design_space:bool=True) :: SupervisedCriterion

The voxel accuracy loss is a three-dimensional version of the pixel accuracy loss [7]. It reports the percentage of voxels that are correctly classified. This metric can provide misleading results when the class of interest occupies only a small fraction of the image, as the measure is then biased towards reporting how well negative cases (i.e., where the class is not present) are identified. Voxel accuracy reaches its best value at 1 and its worst value at 0, i.e., higher values are better.

Type Default Details
ε float 1e-06 A small value $>0$ that avoids division by $0$ and therefore improves numerical stability.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
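A minimal sketch of the voxel accuracy on binarized densities (illustrative only; not the library API). It also demonstrates the imbalance issue described above: predicting "all void" on a mostly-void target still scores well.

```python
import numpy as np

def voxel_accuracy(pred, target, threshold=0.5):
    """Fraction of voxels whose binarized prediction matches the target."""
    return np.mean((pred > threshold) == (target > threshold))
```

With a target that is 75% void, an all-zero prediction already reaches an accuracy of 0.75 despite missing every solid voxel.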

class BalancedVoxelAccuracy[source]

BalancedVoxelAccuracy(ε:float=1e-06, compute_only_on_design_space:bool=True) :: SupervisedCriterion

The balanced voxel accuracy loss [9] is a balanced version of the voxel accuracy criterion and can also be interpreted as a rescaled version of Youden's index [10]. This makes it a better metric for imbalanced data. It is defined as the average of the recall obtained on each class. The criterion reaches its best value at 1 and its worst value at 0, i.e., higher values are better.

Type Default Details
ε float 1e-06 A small value $>0$ that avoids division by $0$ and therefore improves numerical stability.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
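A minimal sketch of balanced accuracy as the average per-class recall (illustrative only; not the library API):

```python
import numpy as np

def balanced_voxel_accuracy(pred, target, threshold=0.5, eps=1e-6):
    """Average of the recall on each class: (TPR + TNR) / 2."""
    p = pred > threshold
    t = target > threshold
    tpr = (np.sum(p & t) + eps) / (np.sum(t) + eps)     # recall on solid voxels
    tnr = (np.sum(~p & ~t) + eps) / (np.sum(~t) + eps)  # recall on void voxels
    return 0.5 * (tpr + tnr)
```

On the all-void prediction for a 75%-void target, plain voxel accuracy is 0.75, while balanced accuracy drops to 0.5, exposing that the solid class is entirely missed.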

class L2Accuracy[source]

L2Accuracy(ε:float=1e-06, compute_only_on_design_space:bool=True) :: SupervisedCriterion

The L2 accuracy loss [8] reports an accuracy based on the root mean squared error of the predictions. The criterion reaches its best value at 1 and its worst value at 0, i.e., higher values are better.

Type Default Details
ε float 1e-06 A small value $>0$ that avoids division by $0$ and therefore improves numerical stability.
compute_only_on_design_space bool True Whether the criterion should be evaluated only on voxels that have a design space information of -1, i.e., voxels that can be freely optimized. This parameter does not affect all criteria.
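One plausible reading of this criterion, assuming it is defined as 1 minus the root mean squared error of the density predictions (the actual dl4to definition may differ; `l2_accuracy` is illustrative only):

```python
import numpy as np

def l2_accuracy(pred, target):
    """Sketch of an RMSE-based accuracy: 1 for a perfect prediction,
    approaching 0 as the root mean squared error grows towards 1."""
    return 1.0 - np.sqrt(np.mean((pred - target) ** 2))
```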

References

[1] Pihur, Vasyl, Susmita Datta, and Somnath Datta. "Weighted rank aggregation of cluster validation measures: a Monte Carlo cross-entropy approach." Bioinformatics 23.13 (2007): 1607-1615.

[2] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE international conference on computer vision. 2017.

[3] Sudre, Carole H., et al. "Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations." Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, Proceedings 3. Springer International Publishing, 2017.

[4] Taha, Abdel Aziz, and Allan Hanbury. "Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool." BMC medical imaging 15.1 (2015): 1-28.

[5] Salehi, Seyed Sadegh Mohseni, Deniz Erdogmus, and Ali Gholipour. "Tversky loss function for image segmentation using 3D fully convolutional deep networks." Machine Learning in Medical Imaging: 8th International Workshop, MLMI 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 10, 2017, Proceedings 8. Springer International Publishing, 2017.

[6] Abraham, Nabila, and Naimul Mefraz Khan. "A novel focal tversky loss function with improved attention u-net for lesion segmentation." 2019 IEEE 16th international symposium on biomedical imaging (ISBI 2019). IEEE, 2019.

[7] Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.

[8] Banga, Saurabh, et al. "3d topology optimization using convolutional neural networks." arXiv preprint arXiv:1808.07440 (2018).

[9] Brodersen, Kay H., et al. "The balanced accuracy and its posterior distribution." Proceedings of the 20th International Conference on Pattern Recognition. IEEE, 2010. 3121-3124.

[10] Youden, William J. "Index for rating diagnostic tests." Cancer 3.1 (1950): 32-35.