cellmap_segmentation_challenge.evaluate

Functions

save_numpy_class_arrays_to_zarr(save_path, ...)

Save a list of 3D numpy arrays of binary or instance labels to a Zarr-2 file with the required structure.

save_numpy_class_labels_to_zarr(save_path, ...)

Save a single 3D numpy array of class labels to a Zarr-2 file with the required structure.

score_instance(pred_label, truth_label[, ...])

Score a single instance label volume against the ground truth instance label volume.

score_label(pred_label_path[, truth_path, ...])

Score a single label volume against the ground truth label volume.

score_semantic(pred_label, truth_label)

Score a single semantic label volume against the ground truth semantic label volume.

score_submission(submission_path[, ...])

Score a submission against the ground truth data.

score_volume(pred_volume_path[, truth_path, ...])

Score a single volume against the ground truth volume.

unzip_file(zip_path)

Unzip a zip file to a specified directory.

cellmap_segmentation_challenge.evaluate.unzip_file(zip_path)

Unzip a zip file to a specified directory.

Parameters:

zip_path (str) – The path to the zip file.

Example usage:

unzip_file('submission.zip')
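A minimal runnable sketch; the paths are hypothetical and the exact extraction destination is an assumption (check the unzipped output before scoring). The unzipped store can be inspected with zarr:

import zarr
from cellmap_segmentation_challenge.evaluate import unzip_file

# Assumed behavior: the archive unzips to a Zarr-2 store next to the zip file
unzip_file('submission.zip')
print(zarr.open('submission.zarr', mode='r').tree())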

cellmap_segmentation_challenge.evaluate.save_numpy_class_labels_to_zarr(save_path, test_volume_name, label_name, labels, overwrite=False)

Save a single 3D numpy array of class labels to a Zarr-2 file with the required structure.

Parameters:
  • save_path (str) – The path to save the Zarr-2 file (ending with <filename>.zarr).

  • test_volume_name (str) – The name of the test volume.

  • label_name (str or list) – The name(s) of the label class(es) encoded in the array.

  • labels (np.ndarray) – A 3D numpy array of class labels.

  • overwrite (bool) – Whether to overwrite an existing Zarr-2 file. Defaults to False.

Example usage:

# Generate random class labels, with 0 as background
labels = np.random.randint(0, 4, (128, 128, 128))
save_numpy_class_labels_to_zarr('submission.zarr', 'test_volume', ['label1', 'label2', 'label3'], labels)
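Class labels often come from per-class predictions. A minimal sketch, assuming hypothetical probability maps, taking an argmax and reserving 0 for background (the 0.5 threshold and class names are placeholders):

import numpy as np
from cellmap_segmentation_challenge.evaluate import save_numpy_class_labels_to_zarr

# Hypothetical per-class probabilities for 3 classes over a 128^3 volume
probs = np.random.rand(3, 128, 128, 128)
# Assign each voxel to its best class (1-indexed), or background (0)
# when no class is confident
labels = np.where(probs.max(axis=0) > 0.5, probs.argmax(axis=0) + 1, 0)
save_numpy_class_labels_to_zarr('submission.zarr', 'test_volume',
                                ['label1', 'label2', 'label3'], labels)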

cellmap_segmentation_challenge.evaluate.save_numpy_class_arrays_to_zarr(save_path, test_volume_name, label_names, labels, overwrite=False)

Save a list of 3D numpy arrays of binary or instance labels to a Zarr-2 file with the required structure.

Parameters:
  • save_path (str) – The path to save the Zarr-2 file (ending with <filename>.zarr).

  • test_volume_name (str) – The name of the test volume.

  • label_names (list) – A list of label names corresponding to the list of 3D numpy arrays.

  • labels (list) – A list of 3D numpy arrays of binary or instance labels.

  • overwrite (bool) – Whether to overwrite an existing Zarr-2 file. Defaults to False.

Example usage:

label_names = ['label1', 'label2', 'label3']
# Generate random binary volumes for each label
labels = [np.random.randint(0, 2, (128, 128, 128)) for _ in range(len(label_names))]
save_numpy_class_arrays_to_zarr('submission.zarr', 'test_volume', label_names, labels)
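For classes scored as instances, each array should carry integer instance IDs rather than a binary mask. A sketch using scipy.ndimage.label as one choice of connected-component labeling (the threshold is a placeholder; 'mito' is one of the default instance classes):

import numpy as np
from scipy import ndimage
from cellmap_segmentation_challenge.evaluate import save_numpy_class_arrays_to_zarr

# Hypothetical binary prediction for an instance class
mask = np.random.rand(128, 128, 128) > 0.95
# Connected components -> a unique integer ID per instance
instances, num_instances = ndimage.label(mask)
save_numpy_class_arrays_to_zarr('submission.zarr', 'test_volume', ['mito'], [instances])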

cellmap_segmentation_challenge.evaluate.score_instance(pred_label, truth_label, hausdorff_distance_max=inf) → dict[str, float]

Score a single instance label volume against the ground truth instance label volume.

Parameters:
  • pred_label (np.ndarray) – The predicted instance label volume.

  • truth_label (np.ndarray) – The ground truth instance label volume.

  • hausdorff_distance_max (float) – The maximum Hausdorff distance to consider. Defaults to infinity.

Returns:

A dictionary of scores for the instance label volume.

Return type:

dict

Example usage:

scores = score_instance(pred_label, truth_label)
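A self-contained sketch on toy volumes (shapes, offsets, and the distance cap are arbitrary; the returned keys follow the results structure documented below):

import numpy as np
from cellmap_segmentation_challenge.evaluate import score_instance

# Two toy instances in the ground truth
truth = np.zeros((64, 64, 64), dtype=np.uint32)
truth[10:20, 10:20, 10:20] = 1
truth[40:50, 40:50, 40:50] = 2

# Prediction: instance 1 shifted by one voxel, instance 2 exact
pred = np.zeros_like(truth)
pred[11:21, 10:20, 10:20] = 1
pred[40:50, 40:50, 40:50] = 2

scores = score_instance(pred, truth, hausdorff_distance_max=100)
print(scores)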

cellmap_segmentation_challenge.evaluate.score_semantic(pred_label, truth_label) → dict[str, float]

Score a single semantic label volume against the ground truth semantic label volume.

Parameters:
  • pred_label (np.ndarray) – The predicted semantic label volume.

  • truth_label (np.ndarray) – The ground truth semantic label volume.

Returns:

A dictionary of scores for the semantic label volume.

Return type:

dict

Example usage:

scores = score_semantic(pred_label, truth_label)
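A self-contained sketch on toy binary volumes (the returned keys, e.g. iou and dice_score, follow the results structure documented below):

import numpy as np
from cellmap_segmentation_challenge.evaluate import score_semantic

truth = np.random.randint(0, 2, (64, 64, 64))
pred = truth.copy()
pred[:8] = 0  # corrupt a slab so the scores are non-trivial

scores = score_semantic(pred, truth)
print(scores)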

cellmap_segmentation_challenge.evaluate.score_label(pred_label_path, truth_path='data/ground_truth.zarr', instance_classes=['nuc', 'vim', 'ves', 'endo', 'lyso', 'ld', 'perox', 'mito', 'np', 'mt', 'cell', 'instance']) → dict[str, float]

Score a single label volume against the ground truth label volume.

Parameters:

  • pred_label_path (str) – The path to the predicted label volume.

  • truth_path (str) – The path to the ground truth Zarr-2 file.

  • instance_classes (list) – The label classes scored as instance segmentations; all others are scored as semantic segmentations.

Returns:

A dictionary of scores for the label volume.

Return type:

dict

Example usage:

scores = score_label('pred.zarr/test_volume/label1')
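Spelled out with its keyword arguments (paths are hypothetical; instance_classes keeps its default unless overridden):

from cellmap_segmentation_challenge.evaluate import score_label

# The label path points at one class array inside the (unzipped) submission store
scores = score_label(
    'pred.zarr/test_volume/label1',
    truth_path='data/ground_truth.zarr',
)
print(scores)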

cellmap_segmentation_challenge.evaluate.score_volume(pred_volume_path, truth_path='data/ground_truth.zarr', instance_classes=['nuc', 'vim', 'ves', 'endo', 'lyso', 'ld', 'perox', 'mito', 'np', 'mt', 'cell', 'instance']) → dict[str, dict[str, float]]

Score a single volume against the ground truth volume.

Parameters:

  • pred_volume_path (str) – The path to the predicted volume.

  • truth_path (str) – The path to the ground truth Zarr-2 file.

  • instance_classes (list) – The label classes scored as instance segmentations; all others are scored as semantic segmentations.

Returns:

A dictionary of scores for the volume.

Return type:

dict

Example usage:

scores = score_volume('pred.zarr/test_volume')
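A sketch iterating the per-label results (paths are hypothetical; the nesting follows the declared return type dict[str, dict[str, float]]):

from cellmap_segmentation_challenge.evaluate import score_volume

volume_scores = score_volume('pred.zarr/test_volume',
                             truth_path='data/ground_truth.zarr')
for label, label_scores in volume_scores.items():
    print(label, label_scores)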

cellmap_segmentation_challenge.evaluate.score_submission(submission_path, result_file=None, truth_path='data/ground_truth.zarr', instance_classes=['nuc', 'vim', 'ves', 'endo', 'lyso', 'ld', 'perox', 'mito', 'np', 'mt', 'cell', 'instance']) → dict[str, dict[str, dict[str, float]]]

Score a submission against the ground truth data.

Parameters:
  • submission_path (str) – The path to the zipped submission Zarr-2 file.

  • result_file (str) – The path to save the scores.

  • truth_path (str) – The path to the ground truth Zarr-2 file.

  • instance_classes (list) – The label classes scored as instance segmentations; all others are scored as semantic segmentations.

Returns:

A dictionary of scores for the submission.

Return type:

dict

Example usage:

scores = score_submission('submission.zip')
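A sketch that also writes the results file (paths are hypothetical; the returned dictionary follows the structure below):

from cellmap_segmentation_challenge.evaluate import score_submission

scores = score_submission(
    'submission.zip',
    result_file='results.json',
    truth_path='data/ground_truth.zarr',
)
print(scores['overall_score'])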

The results JSON is a dictionary with the following structure:

{
    "volume" (the name of the ground truth volume): {
        "label" (the name of the predicted class): {
            (For semantic segmentation)
                "iou": (the intersection over union score),
                "dice_score": (the dice score),
            OR
            (For instance segmentation)
                "accuracy": (the accuracy score),
                "hausdorff_distance": (the Hausdorff distance),
                "normalized_hausdorff_distance": (the normalized Hausdorff distance),
                "combined_score": (the geometric mean of the accuracy and the normalized Hausdorff distance),
        },
        "num_voxels": (the number of voxels in the ground truth volume),
    },
    "label_scores": {
        (the name of the predicted class): {
            (For semantic segmentation)
                "iou": (the mean intersection over union score),
                "dice_score": (the mean dice score),
            OR
            (For instance segmentation)
                "accuracy": (the mean accuracy score),
                "hausdorff_distance": (the mean Hausdorff distance),
                "combined_score": (the mean geometric mean of the accuracy and the normalized Hausdorff distance),
        },
    },
    "overall_score": (the mean of the combined scores across all classes),
}
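A minimal sketch for inspecting a saved results file, assuming it was written by score_submission with result_file='results.json':

import json

with open('results.json') as f:
    results = json.load(f)

print('overall:', results['overall_score'])
for label, label_scores in results['label_scores'].items():
    print(label, label_scores)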