kwcoco.metrics.functional module

kwcoco.metrics.functional.fast_confusion_matrix(y_true, y_pred, n_labels, sample_weight=None)[source]

A faster version of the sklearn confusion matrix that avoids its expensive input checks and label rectification.

Parameters:
  • y_true (ndarray) – ground truth class label for each sample

  • y_pred (ndarray) – predicted class label for each sample

  • n_labels (int) – number of labels

  • sample_weight (ndarray | None) – weight of each sample. Extended typing: ndarray[Any, Int | Float]

Returns:

matrix where rows represent the real (true) labels, columns represent the predicted labels, and the value in each cell is the total weight of the samples that fall into it. Extended typing: ndarray[Shape['*, *'], Int64 | Float64]

Return type:

ndarray

Example

>>> from kwcoco.metrics.functional import fast_confusion_matrix
>>> import numpy as np
>>> y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0,  0, 1])
>>> y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 1,  1, 1])
>>> fast_confusion_matrix(y_true, y_pred, 2)
array([[4, 2],
       [3, 1]]...)
>>> fast_confusion_matrix(y_true, y_pred, 2).ravel()
array([4, 2, 3, 1]...)
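
For reference, the core accumulation can be written directly with numpy by flattening each (true, pred) pair into a single index and summing sample weights with np.bincount. The sketch below is illustrative only (the helper name _confusion_sketch is hypothetical) and is not necessarily the exact implementation.

import numpy as np

def _confusion_sketch(y_true, y_pred, n_labels, sample_weight=None):
    # Hypothetical re-implementation for illustration.
    if sample_weight is None:
        sample_weight = np.ones(len(y_true))
    # Map each (true, pred) pair to one cell of a flattened matrix.
    flat_idx = y_true * n_labels + y_pred
    # Accumulate the weight of every sample into its cell.
    matrix = np.bincount(flat_idx, weights=sample_weight,
                         minlength=n_labels ** 2)
    return matrix.reshape(n_labels, n_labels)

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
print(_confusion_sketch(y_true, y_pred, 2))
# [[4. 2.]
#  [3. 1.]]
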
kwcoco.metrics.functional._truncated_roc(y_df, bg_idx=-1, fp_cutoff=None)[source]

Computes truncated ROC info
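
As a rough illustration of what "truncated" can mean here, the sketch below builds an ordinary ROC curve with sklearn and keeps only the portion where the false-positive count stays within fp_cutoff. The helper name _truncated_roc_sketch, its argument layout, and the exact truncation rule are assumptions for illustration; the real function works on the detection-confusion table y_df and uses bg_idx to identify the background class.

import numpy as np
import sklearn.metrics

def _truncated_roc_sketch(y_true, scores, fp_cutoff=None):
    # Hypothetical illustration: build a plain ROC curve and drop the
    # portion where the false-positive count exceeds fp_cutoff.
    fpr, tpr, thresh = sklearn.metrics.roc_curve(y_true, scores)
    n_neg = int(np.sum(np.asarray(y_true) == 0))
    if fp_cutoff is not None and n_neg > 0:
        keep = (fpr * n_neg) <= fp_cutoff  # convert the rate back to a count
        fpr, tpr, thresh = fpr[keep], tpr[keep], thresh[keep]
    auc = sklearn.metrics.auc(fpr, tpr) if len(fpr) > 1 else float('nan')
    return {'fpr': fpr, 'tpr': tpr, 'thresholds': thresh, 'auc': auc}

# Example: 3 negatives, 2 positives, keep the curve up to 1 false positive.
info = _truncated_roc_sketch([0, 0, 0, 1, 1], [0.9, 0.3, 0.2, 0.8, 0.7], fp_cutoff=1)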

kwcoco.metrics.functional._pr_curves(y)[source]

Compute a PR curve from detection confusion vectors

Parameters:

y (pd.DataFrame | DataFrameArray) – output of detection_confusions

Returns:

Tuple[float, ndarray, ndarray]

Example

>>> # xdoctest: +REQUIRES(module:sklearn)
>>> import pandas as pd
>>> y1 = pd.DataFrame.from_records([
>>>     {'pred': 0, 'score': 10.00, 'true': -1, 'weight': 1.00},
>>>     {'pred': 0, 'score':  1.65, 'true':  0, 'weight': 1.00},
>>>     {'pred': 0, 'score':  8.64, 'true': -1, 'weight': 1.00},
>>>     {'pred': 0, 'score':  3.97, 'true':  0, 'weight': 1.00},
>>>     {'pred': 0, 'score':  1.68, 'true':  0, 'weight': 1.00},
>>>     {'pred': 0, 'score':  5.06, 'true':  0, 'weight': 1.00},
>>>     {'pred': 0, 'score':  0.25, 'true':  0, 'weight': 1.00},
>>>     {'pred': 0, 'score':  1.75, 'true':  0, 'weight': 1.00},
>>>     {'pred': 0, 'score':  8.52, 'true':  0, 'weight': 1.00},
>>>     {'pred': 0, 'score':  5.20, 'true':  0, 'weight': 1.00},
>>> ])
>>> from kwcoco.metrics.functional import _pr_curves
>>> import kwarray
>>> y2 = kwarray.DataFrameArray(y1)
>>> _pr_curves(y2)
>>> _pr_curves(y1)
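
For context, the sketch below shows roughly how such a table can be reduced to a binary PR problem with sklearn, assuming that a row counts as a true positive when its predicted label matches the assigned truth label and that true == -1 marks detections that matched no truth object. This is an illustrative assumption, not the exact kwcoco code path.

import pandas as pd
import sklearn.metrics

# Same column convention as the example above; true == -1 is assumed to
# mean the detection was not assigned to any truth object.
y = pd.DataFrame({
    'pred':   [0, 0, 0, 0, 0],
    'score':  [10.0, 8.64, 5.06, 3.97, 1.65],
    'true':   [-1, -1, 0, 0, 0],
    'weight': [1.0, 1.0, 1.0, 1.0, 1.0],
})
y_bin = (y['true'] == y['pred']).astype(int)
ap = sklearn.metrics.average_precision_score(
    y_bin, y['score'], sample_weight=y['weight'])
prec, rec, thresh = sklearn.metrics.precision_recall_curve(
    y_bin, y['score'], sample_weight=y['weight'])
print(ap)
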
kwcoco.metrics.functional._average_precision(tpr, ppv)[source]

Compute average precision of a binary PR curve. This is simply the area under the curve.

Parameters:
  • tpr (ndarray) – true positive rate (i.e., recall)

  • ppv (ndarray) – positive predictive value (i.e., precision)
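
Because average precision is just the area under the PR curve, it can be approximated with a trapezoidal rule over the (tpr, ppv) points. The sketch below (with a hypothetical helper name) shows that computation; the actual implementation may use a different integration or interpolation scheme.

import numpy as np

def _average_precision_sketch(tpr, ppv):
    tpr = np.asarray(tpr, dtype=float)
    ppv = np.asarray(ppv, dtype=float)
    # Sort by recall so the curve is integrated left to right.
    order = np.argsort(tpr)
    tpr, ppv = tpr[order], ppv[order]
    # Trapezoidal rule: sum of (delta recall) * (mean precision of each segment).
    return float(np.sum(np.diff(tpr) * (ppv[1:] + ppv[:-1]) / 2))

# A perfect detector keeps precision at 1.0 across the full recall range.
print(_average_precision_sketch(tpr=[0.0, 0.5, 1.0], ppv=[1.0, 1.0, 1.0]))
# 1.0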