kwcoco.metrics

mkinit kwcoco.metrics -w --relative

Submodules

Package Contents

Classes

DetectionMetrics

Object that computes associations between detections and can convert them into sklearn-compatible representations for scoring.

BinaryConfusionVectors

Stores information about a binary classification problem.

ConfusionVectors

Stores information used to construct a confusion matrix. This includes corresponding vectors of predicted labels, true labels, sample weights, etc.

Measures

Example

OneVsRestConfusionVectors

Container for multiple one-vs-rest binary confusion vectors

PerClass_Measures

Functions

eval_detections_cli(**kw)

DEPRECATED: use kwcoco eval instead

class kwcoco.metrics.DetectionMetrics(dmet, classes=None)[source]

Bases: ubelt.NiceRepr

Object that computes associations between detections and can convert them into sklearn-compatible representations for scoring.

Variables
  • gid_to_true_dets (Dict) – maps image ids to truth

  • gid_to_pred_dets (Dict) – maps image ids to predictions

  • classes (CategoryTree) – category coder

Example

>>> dmet = DetectionMetrics.demo(
>>>     nimgs=100, nboxes=(0, 3), n_fp=(0, 1), classes=8, score_noise=0.9, hacked=False)
>>> print(dmet.score_kwcoco(bias=0, compat='mutex', prioritize='iou')['mAP'])
...
>>> # NOTE: IN GENERAL NETHARN AND VOC ARE NOT THE SAME
>>> print(dmet.score_voc(bias=0)['mAP'])
0.8582...
>>> #print(dmet.score_coco()['mAP'])
score_coco (alias of score_pycocotools)
clear(dmet)
__nice__(dmet)
classmethod from_coco(DetectionMetrics, true_coco, pred_coco, gids=None, verbose=0)

Create detection metrics from two coco files representing the truth and predictions.

Parameters
  • true_coco (kwcoco.CocoDataset)

  • pred_coco (kwcoco.CocoDataset)

Example

>>> import kwcoco
>>> from kwcoco.demo.perterb import perterb_coco
>>> true_coco = kwcoco.CocoDataset.demo('shapes')
>>> perterbkw = dict(box_noise=0.5, cls_noise=0.5, score_noise=0.5)
>>> pred_coco = perterb_coco(true_coco, **perterbkw)
>>> self = DetectionMetrics.from_coco(true_coco, pred_coco)
>>> self.score_voc()
_register_imagename(dmet, imgname, gid=None)
add_predictions(dmet, pred_dets, imgname=None, gid=None)

Register/Add predicted detections for an image

Parameters
  • pred_dets (Detections) – predicted detections

  • imgname (str) – a unique string to identify the image

  • gid (int, optional) – the integer image id if known

add_truth(dmet, true_dets, imgname=None, gid=None)

Register/Add groundtruth detections for an image

Parameters
  • true_dets (Detections) – groundtruth

  • imgname (str) – a unique string to identify the image

  • gid (int, optional) – the integer image id if known
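
Example

A minimal sketch of registering truth and predictions by hand. The kwimage.Detections.random call and the list-of-names classes argument are illustrative assumptions, not a documented recipe:

>>> import kwimage
>>> classes = ['cat', 'dog']
>>> dmet = DetectionMetrics(classes=classes)
>>> # register truth and predictions under the same image name
>>> true_dets = kwimage.Detections.random(num=3, classes=classes)
>>> pred_dets = kwimage.Detections.random(num=4, classes=classes)
>>> dmet.add_truth(true_dets, imgname='image1')
>>> dmet.add_predictions(pred_dets, imgname='image1')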

true_detections(dmet, gid)

Gets the Detections representation for the groundtruth in an image.

pred_detections(dmet, gid)

Gets the Detections representation for the predictions in an image.

confusion_vectors(dmet, iou_thresh=0.5, bias=0, gids=None, compat='mutex', prioritize='iou', ignore_classes='ignore', background_class=ub.NoParam, verbose='auto', workers=0, track_probs='try', max_dets=None)

Assigns predicted boxes to the true boxes so we can transform the detection problem into a classification problem for scoring.

Parameters
  • iou_thresh (float | List[float], default=0.5) – bounding box overlap (IoU) threshold required for assignment. If a list is given, the return type is a dict mapping each threshold to its ConfusionVectors.

  • bias (float, default=0.0) – bias for computing bounding box overlap; either 1 or 0

  • gids (List[int], default=None) – subset of image ids to compute confusion metrics on. If not specified, all images are used.

  • compat (str, default='all') – can be ('ancestors' | 'mutex' | 'all'). Determines which pred boxes are allowed to match which true boxes. If 'mutex', then pred boxes can only match true boxes of the same class. If 'ancestors', then pred boxes can match true boxes that match or have a coarser label. If 'all', then any pred can match any true, regardless of its category label.

  • prioritize (str, default='iou') – can be ('iou' | 'class' | 'correct'). Determines which true box to assign to if multiple true boxes overlap a predicted box. If prioritize is 'iou', then the true box with maximum iou (above iou_thresh) will be chosen. If prioritize is 'class', then it will prefer matching a compatible class over a higher iou. If prioritize is 'correct', then ancestors of the true class are preferred over descendants of the true class, which are preferred over unrelated classes.

  • ignore_classes (set, default={'ignore'}) – class names indicating ignore regions

  • background_class (str, default=ub.NoParam) – name of the background class. If unspecified we try to determine it with heuristics. A value of None means there is no background class.

  • verbose (int | str, default='auto') – verbosity flag. In auto mode, verbose=1 if len(gids) > 1000.

  • workers (int, default=0) – number of parallel assignment processes

  • track_probs (str, default='try') – can be 'try', 'force', or False. If truthy, we assume probabilities for multiple classes are available.

Returns

ConfusionVectors | Dict[float, ConfusionVectors]

Example

>>> dmet = DetectionMetrics.demo(nimgs=30, classes=3,
>>>                              nboxes=10, n_fp=3, box_noise=10,
>>>                              with_probs=False)
>>> iou_to_cfsn = dmet.confusion_vectors(iou_thresh=[0.3, 0.5, 0.9])
>>> for t, cfsn in iou_to_cfsn.items():
>>>     print('t = {!r}'.format(t))
>>>     print(cfsn.binarize_ovr().measures())
>>>     print(cfsn.binarize_classless().measures())
score_kwant(dmet, iou_thresh=0.5)

Scores the detections using kwant

score_kwcoco(dmet, iou_thresh=0.5, bias=0, gids=None, compat='all', prioritize='iou')

Our scoring method.
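
Example

A minimal sketch reusing the demo constructor documented below; the exact mAP value depends on the random data, so no expected output is shown:

>>> dmet = DetectionMetrics.demo(nimgs=10, nboxes=(0, 3), n_fp=(0, 1), classes=3, rng=0)
>>> kwcoco_info = dmet.score_kwcoco(iou_thresh=0.5)
>>> print('mAP = {:.4f}'.format(kwcoco_info['mAP']))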

score_voc(dmet, iou_thresh=0.5, bias=1, method='voc2012', gids=None, ignore_classes='ignore')

Score using the VOC method.

Example

>>> dmet = DetectionMetrics.demo(
>>>     nimgs=100, nboxes=(0, 3), n_fp=(0, 1), classes=8,
>>>     score_noise=.5)
>>> print(dmet.score_voc()['mAP'])
0.9399...
_to_coco(dmet)

Convert to a coco representation of truth and predictions, with inverse aid mappings.

score_pycocotools(dmet, with_evaler=False, with_confusion=False, verbose=0, iou_thresholds=None)

Score using the MS-COCO (pycocotools) method.

Returns

dictionary with pycocotools (pct) info

Return type

Dict

Example

>>> # xdoctest: +REQUIRES(module:pycocotools)
>>> from kwcoco.metrics.detect_metrics import *
>>> dmet = DetectionMetrics.demo(
>>>     nimgs=10, nboxes=(0, 3), n_fn=(0, 1), n_fp=(0, 1), classes=8, with_probs=False)
>>> pct_info = dmet.score_pycocotools(verbose=1,
>>>                                   with_evaler=True,
>>>                                   with_confusion=True,
>>>                                   iou_thresholds=[0.5, 0.9])
>>> evaler = pct_info['evaler']
>>> iou_to_cfsn_vecs = pct_info['iou_to_cfsn_vecs']
>>> for iou_thresh in iou_to_cfsn_vecs.keys():
>>>     print('iou_thresh = {!r}'.format(iou_thresh))
>>>     cfsn_vecs = iou_to_cfsn_vecs[iou_thresh]
>>>     ovr_measures = cfsn_vecs.binarize_ovr().measures()
>>>     print('ovr_measures = {}'.format(ub.repr2(ovr_measures, nl=1, precision=4)))

Note

By default pycocotools computes average precision as the literal average of computed precisions at 101 uniformly spaced recall thresholds.

pycocotools seems to only allow predictions with the same category as the truth to match those truth objects. This should be the same as calling dmet.confusion_vectors with compat='mutex'.

pycocotools does not take into account the fact that each box often has a score for each category.

pycocotools will be incorrect if any annotation has an id of 0.

A major difference between the way kwcoco and pycocotools score is the calculation of AP. The assignment between truth and predicted detections produces similar enough results. Given our confusion vectors we use the scikit-learn definition of AP, whereas pycocotools computes precision and recall (more or less correctly) but then resamples the precision at various specified recall thresholds (in the accumulate function, specifically where pr is resampled into the q array). This can lead to a large difference in reported scores.

pycocotools also smooths out the precision such that it is monotonically decreasing, which might not be the best idea.

pycocotools area ranges are inclusive on both ends, which means the "small" and "medium" truth selections do overlap somewhat.
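
To make the AP difference concrete, here is a small numpy-only sketch (not the kwcoco or pycocotools implementations) contrasting the scikit-learn step-sum AP with a pycocotools-style 101-point resampled AP on the same toy precision/recall curve; the monotonic smoothing step is deliberately omitted:

>>> import numpy as np
>>> # toy operating points: recall increases as the threshold is lowered
>>> recall = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
>>> precision = np.array([1.0, 0.8, 0.7, 0.5, 0.4])
>>> # sklearn-style AP: precisions weighted by recall increments
>>> ap_sklearn = np.sum(np.diff(recall, prepend=0) * precision)
>>> # pycocotools-style AP: resample precision at 101 uniformly spaced
>>> # recall thresholds; thresholds beyond the max recall contribute zero
>>> rec_thresholds = np.linspace(0, 1, 101)
>>> inds = np.searchsorted(recall, rec_thresholds, side='left')
>>> q = np.where(inds < len(precision),
>>>              precision[np.minimum(inds, len(precision) - 1)], 0.0)
>>> ap_coco = q.mean()
>>> print('ap_sklearn = {:.4f}, ap_coco = {:.4f}'.format(ap_sklearn, ap_coco))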

classmethod demo(cls, **kwargs)

Creates random true boxes and predicted boxes that have some noisy offset from the truth.

Kwargs:

classes (int, default=1): class list or the number of foreground classes.

nimgs (int, default=1): number of images in the coco datasets.

nboxes (int, default=1): boxes per image.

n_fp (int, default=0): number of false positives.

n_fn (int, default=0): number of false negatives.

box_noise (float, default=0): std of a normal distribution used to perturb both box location and box size.

cls_noise (float, default=0): probability that a class label will change. Must be within 0 and 1.

anchors (ndarray, default=None): used to create random boxes.

null_pred (bool, default=0): if True, predicted classes are returned as null, which means only localization scoring is suitable.

with_probs (bool, default=1): if True, includes per-class probabilities with predictions.

CommandLine

xdoctest -m kwcoco.metrics.detect_metrics DetectionMetrics.demo:2 --show

Example

>>> kwargs = {}
>>> # Seed the RNG
>>> kwargs['rng'] = 0
>>> # Size parameters determine how big the data is
>>> kwargs['nimgs'] = 5
>>> kwargs['nboxes'] = 7
>>> kwargs['classes'] = 11
>>> # Noise parameters perterb predictions further from the truth
>>> kwargs['n_fp'] = 3
>>> kwargs['box_noise'] = 0.1
>>> kwargs['cls_noise'] = 0.5
>>> dmet = DetectionMetrics.demo(**kwargs)
>>> print('dmet.classes = {}'.format(dmet.classes))
dmet.classes = <CategoryTree(nNodes=12, maxDepth=3, maxBreadth=4...)>
>>> # Can grab kwimage.Detection object for any image
>>> print(dmet.true_detections(gid=0))
<Detections(4)>
>>> print(dmet.pred_detections(gid=0))
<Detections(7)>

Example

>>> # Test case with null predicted categories
>>> dmet = DetectionMetrics.demo(nimgs=30, null_pred=1, classes=3,
>>>                              nboxes=10, n_fp=3, box_noise=0.1,
>>>                              with_probs=False)
>>> dmet.gid_to_pred_dets[0].data
>>> dmet.gid_to_true_dets[0].data
>>> cfsn_vecs = dmet.confusion_vectors()
>>> binvecs_ovr = cfsn_vecs.binarize_ovr()
>>> binvecs_per = cfsn_vecs.binarize_classless()
>>> measures_per = binvecs_per.measures()
>>> measures_ovr = binvecs_ovr.measures()
>>> print('measures_per = {!r}'.format(measures_per))
>>> print('measures_ovr = {!r}'.format(measures_ovr))
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> measures_ovr['perclass'].draw(key='pr', fnum=2)

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> from kwcoco.metrics.detect_metrics import DetectionMetrics
>>> dmet = DetectionMetrics.demo(
>>>     n_fp=(0, 1), n_fn=(0, 1), nimgs=32, nboxes=(0, 16),
>>>     classes=3, rng=0, newstyle=1, box_noise=0.5, cls_noise=0.0, score_noise=0.3, with_probs=False)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> summary = dmet.summarize(plot=True, title='DetectionMetrics summary demo', with_ovr=True, with_bin=False)
>>> summary['bin_measures']
>>> kwplot.show_if_requested()
summarize(dmet, out_dpath=None, plot=False, title='', with_bin='auto', with_ovr='auto')

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> from kwcoco.metrics.detect_metrics import DetectionMetrics
>>> dmet = DetectionMetrics.demo(
>>>     n_fp=(0, 128), n_fn=(0, 4), nimgs=512, nboxes=(0, 32),
>>>     classes=3, rng=0)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> dmet.summarize(plot=True, title='DetectionMetrics summary demo')
>>> kwplot.show_if_requested()
kwcoco.metrics.eval_detections_cli(**kw)[source]

DEPRECATED: use kwcoco eval instead

CommandLine

xdoctest -m ~/code/kwcoco/kwcoco/metrics/detect_metrics.py eval_detections_cli
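
The replacement CLI can be invoked along these lines (the flag names here are an assumption; verify them with kwcoco eval --help):

kwcoco eval --true_dataset=true.kwcoco.json --pred_dataset=pred.kwcoco.json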
class kwcoco.metrics.BinaryConfusionVectors(data, cx=None, classes=None)[source]

Bases: ubelt.NiceRepr

Stores information about a binary classification problem. This is always with respect to a specific class, which is given by cx and classes.

The data DataFrameArray must contain:
  • is_true – if the row is an instance of class classes[cx]
  • pred_score – the predicted probability of class classes[cx]
  • weight – sample weight of the example

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> self = BinaryConfusionVectors.demo(n=10)
>>> print('self = {!r}'.format(self))
>>> print('measures = {}'.format(ub.repr2(self.measures())))
>>> self = BinaryConfusionVectors.demo(n=0)
>>> print('measures = {}'.format(ub.repr2(self.measures())))
>>> self = BinaryConfusionVectors.demo(n=1)
>>> print('measures = {}'.format(ub.repr2(self.measures())))
>>> self = BinaryConfusionVectors.demo(n=2)
>>> print('measures = {}'.format(ub.repr2(self.measures())))
classmethod demo(cls, n=10, p_true=0.5, p_error=0.2, p_miss=0.0, rng=None)

Create random data for tests

Parameters
  • n (int) – number of rows

  • p_true (float) – fraction of real positive cases

  • p_error (float) – probability of making a recoverable mistake

  • p_miss (float) – probability of making an unrecoverable mistake

  • rng (int | RandomState) – random seed / state

Returns

BinaryConfusionVectors

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> cfsn = BinaryConfusionVectors.demo(n=1000, p_error=0.1, p_miss=0.1)
>>> measures = cfsn.measures()
>>> print('measures = {}'.format(ub.repr2(measures, nl=1)))
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.figure(fnum=1, pnum=(1, 2, 1))
>>> measures.draw('pr')
>>> kwplot.figure(fnum=1, pnum=(1, 2, 2))
>>> measures.draw('roc')
property catname(self)
__nice__(self)
__len__(self)
measures(self, stabalize_thresh=7, fp_cutoff=None, monotonic_ppv=True, ap_method='pycocotools')

Get statistics (F1, G1, MCC) versus thresholds

Parameters
  • stabalize_thresh (int, default=7) – if fewer than this many data points, inserts dummy stabilization data so curves can still be drawn.

  • fp_cutoff (int, default=None) – maximum number of false positives in the truncated roc curves. None is equivalent to float('inf')

  • monotonic_ppv (bool, default=True) – if True, ensures that precision is always increasing as recall decreases. This is done in pycocotools scoring, but I'm not sure it's a good idea.

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> self = BinaryConfusionVectors.demo(n=0)
>>> print('measures = {}'.format(ub.repr2(self.measures())))
>>> self = BinaryConfusionVectors.demo(n=1, p_true=0.5, p_error=0.5)
>>> print('measures = {}'.format(ub.repr2(self.measures())))
>>> self = BinaryConfusionVectors.demo(n=3, p_true=0.5, p_error=0.5)
>>> print('measures = {}'.format(ub.repr2(self.measures())))
>>> self = BinaryConfusionVectors.demo(n=100, p_true=0.5, p_error=0.5, p_miss=0.3)
>>> print('measures = {}'.format(ub.repr2(self.measures())))
>>> print('measures = {}'.format(ub.repr2(ub.odict(self.measures()))))

References

https://en.wikipedia.org/wiki/Confusion_matrix
https://en.wikipedia.org/wiki/Precision_and_recall
https://en.wikipedia.org/wiki/Matthews_correlation_coefficient

_binary_clf_curves(self, stabalize_thresh=7, fp_cutoff=None)

Compute TP, FP, TN, and FN counts for this binary confusion vector.

Code common to ROC, PR, and threshold measures; computes the elements of the binary confusion matrix at all relevant operating-point thresholds.

Parameters
  • stabalize_thresh (int) – if fewer than this many data points, inserts stabilization data.

  • fp_cutoff (int) – maximum number of false positives

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> self = BinaryConfusionVectors.demo(n=1, p_true=0.5, p_error=0.5)
>>> self._binary_clf_curves()
>>> self = BinaryConfusionVectors.demo(n=0, p_true=0.5, p_error=0.5)
>>> self._binary_clf_curves()
>>> self = BinaryConfusionVectors.demo(n=100, p_true=0.5, p_error=0.5)
>>> self._binary_clf_curves()
draw_distribution(self)
_3dplot(self)

Example

>>> # xdoctest: +REQUIRES(module:kwplot)
>>> # xdoctest: +REQUIRES(module:pandas)
>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> from kwcoco.metrics.detect_metrics import DetectionMetrics
>>> dmet = DetectionMetrics.demo(
>>>     n_fp=(0, 1), n_fn=(0, 2), nimgs=256, nboxes=(0, 10),
>>>     bbox_noise=10,
>>>     classes=1)
>>> cfsn_vecs = dmet.confusion_vectors()
>>> self = bin_cfsn = cfsn_vecs.binarize_classless()
>>> #dmet.summarize(plot=True)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.figure(fnum=3)
>>> self._3dplot()
class kwcoco.metrics.ConfusionVectors(cfsn_vecs, data, classes, probs=None)[source]

Bases: ubelt.NiceRepr

Stores information used to construct a confusion matrix. This includes corresponding vectors of predicted labels, true labels, sample weights, etc.

Variables
  • data (kwarray.DataFrameArray) – should at least have keys true, pred, weight

  • classes (Sequence | CategoryTree) – list of category names or category graph

  • probs (ndarray, optional) – probabilities for each class

Example

>>> # xdoctest: +IGNORE_WANT
>>> # xdoctest: +REQUIRES(module:pandas)
>>> from kwcoco.metrics import DetectionMetrics
>>> dmet = DetectionMetrics.demo(
>>>     nimgs=10, nboxes=(0, 10), n_fp=(0, 1), classes=3)
>>> cfsn_vecs = dmet.confusion_vectors()
>>> print(cfsn_vecs.data._pandas())
     pred  true   score  weight     iou  txs  pxs  gid
0       2     2 10.0000  1.0000  1.0000    0    4    0
1       2     2  7.5025  1.0000  1.0000    1    3    0
2       1     1  5.0050  1.0000  1.0000    2    2    0
3       3    -1  2.5075  1.0000 -1.0000   -1    1    0
4       2    -1  0.0100  1.0000 -1.0000   -1    0    0
5      -1     2  0.0000  1.0000 -1.0000    3   -1    0
6      -1     2  0.0000  1.0000 -1.0000    4   -1    0
7       2     2 10.0000  1.0000  1.0000    0    5    1
8       2     2  8.0020  1.0000  1.0000    1    4    1
9       1     1  6.0040  1.0000  1.0000    2    3    1
..    ...   ...     ...     ...     ...  ...  ...  ...
62     -1     2  0.0000  1.0000 -1.0000    7   -1    7
63     -1     3  0.0000  1.0000 -1.0000    8   -1    7
64     -1     1  0.0000  1.0000 -1.0000    9   -1    7
65      1    -1 10.0000  1.0000 -1.0000   -1    0    8
66      1     1  0.0100  1.0000  1.0000    0    1    8
67      3    -1 10.0000  1.0000 -1.0000   -1    3    9
68      2     2  6.6700  1.0000  1.0000    0    2    9
69      2     2  3.3400  1.0000  1.0000    1    1    9
70      3    -1  0.0100  1.0000 -1.0000   -1    0    9
71     -1     2  0.0000  1.0000 -1.0000    2   -1    9
>>> # xdoctest: +REQUIRES(--show)
>>> # xdoctest: +REQUIRES(module:pandas)
>>> import kwplot
>>> kwplot.autompl()
>>> from kwcoco.metrics.confusion_vectors import ConfusionVectors
>>> cfsn_vecs = ConfusionVectors.demo(
>>>     nimgs=128, nboxes=(0, 10), n_fp=(0, 3), n_fn=(0, 3), classes=3)
>>> cx_to_binvecs = cfsn_vecs.binarize_ovr()
>>> measures = cx_to_binvecs.measures()['perclass']
>>> print('measures = {!r}'.format(measures))
measures = <PerClass_Measures({
    'cat_1': <Measures({'ap': 0.227, 'auc': 0.507, 'catname': cat_1, 'max_f1': f1=0.45@0.47, 'nsupport': 788.000})>,
    'cat_2': <Measures({'ap': 0.288, 'auc': 0.572, 'catname': cat_2, 'max_f1': f1=0.51@0.43, 'nsupport': 788.000})>,
    'cat_3': <Measures({'ap': 0.225, 'auc': 0.484, 'catname': cat_3, 'max_f1': f1=0.46@0.40, 'nsupport': 788.000})>,
}) at 0x7facf77bdfd0>
>>> kwplot.figure(fnum=1, doclf=True)
>>> measures.draw(key='pr', fnum=1, pnum=(1, 3, 1))
>>> measures.draw(key='roc', fnum=1, pnum=(1, 3, 2))
>>> measures.draw(key='mcc', fnum=1, pnum=(1, 3, 3))
...
__nice__(cfsn_vecs)
__json__(self)

Serialize to json

Example

>>> # xdoctest: +REQUIRES(module:pandas)
>>> from kwcoco.metrics import ConfusionVectors
>>> self = ConfusionVectors.demo(n_imgs=1, classes=2, n_fp=0, nboxes=1)
>>> state = self.__json__()
>>> print('state = {}'.format(ub.repr2(state, nl=2, precision=2, align=1)))
>>> recon = ConfusionVectors.from_json(state)
classmethod from_json(cls, state)
classmethod demo(cfsn_vecs, **kw)
Parameters

**kwargs – See kwcoco.metrics.DetectionMetrics.demo()

Returns

ConfusionVectors

Example

>>> cfsn_vecs = ConfusionVectors.demo()
>>> print('cfsn_vecs = {!r}'.format(cfsn_vecs))
>>> cx_to_binvecs = cfsn_vecs.binarize_ovr()
>>> print('cx_to_binvecs = {!r}'.format(cx_to_binvecs))
classmethod from_arrays(ConfusionVectors, true, pred=None, score=None, weight=None, probs=None, classes=None)

Construct confusion vector data structure from component arrays

Example

>>> # xdoctest: +REQUIRES(module:pandas)
>>> import kwarray
>>> classes = ['person', 'vehicle', 'object']
>>> rng = kwarray.ensure_rng(0)
>>> true = (rng.rand(10) * len(classes)).astype(int)
>>> probs = rng.rand(len(true), len(classes))
>>> cfsn_vecs = ConfusionVectors.from_arrays(true=true, probs=probs, classes=classes)
>>> cfsn_vecs.confusion_matrix()
pred     person  vehicle  object
real
person        0        0       0
vehicle       2        4       1
object        2        1       0
confusion_matrix(cfsn_vecs, compress=False)

Builds a confusion matrix from the confusion vectors.

Parameters

compress (bool, default=False) – if True removes rows / columns with no entries

Returns

cm – the labeled confusion matrix (Note: we should write an efficient replacement for this use case. #remove_pandas)

Return type

pd.DataFrame

CommandLine

xdoctest -m kwcoco.metrics.confusion_vectors ConfusionVectors.confusion_matrix

Example

>>> # xdoctest: +REQUIRES(module:pandas)
>>> from kwcoco.metrics import DetectionMetrics
>>> dmet = DetectionMetrics.demo(
>>>     nimgs=10, nboxes=(0, 10), n_fp=(0, 1), n_fn=(0, 1), classes=3, cls_noise=.2)
>>> cfsn_vecs = dmet.confusion_vectors()
>>> cm = cfsn_vecs.confusion_matrix()
...
>>> print(cm.to_string(float_format=lambda x: '%.2f' % x))
pred        background  cat_1  cat_2  cat_3
real
background        0.00   1.00   2.00   3.00
cat_1             3.00  12.00   0.00   0.00
cat_2             3.00   0.00  14.00   0.00
cat_3             2.00   0.00   0.00  17.00
coarsen(cfsn_vecs, cxs)

Creates a coarsened set of vectors

Returns

ConfusionVectors

binarize_classless(cfsn_vecs, negative_classes=None)

Creates a binary representation useful for measuring the performance of detectors. It is assumed that scores of "positive" classes should be high and "negative" classes should be low.

Parameters

negative_classes (List[str | int]) – list of negative class names or idxs, by default chooses any class with a true class index of -1. These classes should ideally have low scores.

Returns

BinaryConfusionVectors

Note

The "classlessness" of this depends on the compat="all" argument being used when constructing confusion vectors, otherwise it becomes something like a macro-average because the class information was used in deciding which true and predicted boxes were allowed to match.

Example

>>> from kwcoco.metrics import DetectionMetrics
>>> dmet = DetectionMetrics.demo(
>>>     nimgs=10, nboxes=(0, 10), n_fp=(0, 1), n_fn=(0, 1), classes=3)
>>> cfsn_vecs = dmet.confusion_vectors()
>>> class_idxs = list(dmet.classes.node_to_idx.values())
>>> binvecs = cfsn_vecs.binarize_classless()
binarize_ovr(cfsn_vecs, mode=1, keyby='name', ignore_classes={'ignore'}, approx=0)

Transforms cfsn_vecs into one-vs-rest BinaryConfusionVectors for each category.

Parameters
  • mode (int, default=1) – 0 for hierarchy-aware or 1 for voc-like. MODE 0 IS PROBABLY BROKEN

  • keyby (int | str) – can be cx or name

  • ignore_classes (Set[str]) – category names to ignore

  • approx (bool, default=0) – if True, tries to approximate missing scores; otherwise assumes they are irrecoverable and uses -inf

Returns

cx_to_binvecs – behaves like Dict[int, BinaryConfusionVectors]

Return type

OneVsRestConfusionVectors

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> cfsn_vecs = ConfusionVectors.demo()
>>> print('cfsn_vecs = {!r}'.format(cfsn_vecs))
>>> catname_to_binvecs = cfsn_vecs.binarize_ovr(keyby='name')
>>> print('catname_to_binvecs = {!r}'.format(catname_to_binvecs))


classification_report(cfsn_vecs, verbose=0)

Build a classification report with various metrics.

Example

>>> # xdoctest: +REQUIRES(module:pandas)
>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> cfsn_vecs = ConfusionVectors.demo()
>>> report = cfsn_vecs.classification_report(verbose=1)
class kwcoco.metrics.Measures(info)[source]

Bases: ubelt.NiceRepr, kwcoco.metrics.util.DictProxy

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> binvecs = BinaryConfusionVectors.demo(n=100, p_error=0.5)
>>> self = binvecs.measures()
>>> print('self = {!r}'.format(self))
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> self.draw(doclf=True)
>>> self.draw(key='pr',  pnum=(1, 2, 1))
>>> self.draw(key='roc', pnum=(1, 2, 2))
>>> kwplot.show_if_requested()
property catname(self)
__nice__(self)
reconstruct(self)
classmethod from_json(cls, state)
__json__(self)

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> binvecs = BinaryConfusionVectors.demo(n=10, p_error=0.5)
>>> self = binvecs.measures()
>>> info = self.__json__()
>>> print('info = {}'.format(ub.repr2(info, nl=1)))
>>> populate_info(info)
>>> print('info = {}'.format(ub.repr2(info, nl=1)))
>>> recon = Measures.from_json(info)
summary(self)
draw(self, key=None, prefix='', **kw)

Example

>>> # xdoctest: +REQUIRES(module:kwplot)
>>> # xdoctest: +REQUIRES(module:pandas)
>>> cfsn_vecs = ConfusionVectors.demo()
>>> ovr_cfsn = cfsn_vecs.binarize_ovr(keyby='name')
>>> self = ovr_cfsn.measures()['perclass']
>>> self.draw('mcc', doclf=True, fnum=1)
>>> self.draw('pr', doclf=1, fnum=2)
>>> self.draw('roc', doclf=1, fnum=3)
summary_plot(self, fnum=1, title='', subplots='auto')

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> cfsn_vecs = ConfusionVectors.demo(n=3, p_error=0.5)
>>> binvecs = cfsn_vecs.binarize_classless()
>>> self = binvecs.measures()
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> self.summary_plot()
>>> kwplot.show_if_requested()
classmethod combine(cls, tocombine, precision=None)

Combine binary confusion metrics

Parameters
  • tocombine (List[Measures]) – a list of measures to combine into one

  • precision (float | None) – If specified rounds thresholds to this precision which can prevent a RAM explosion when combining a large number of measures.

Returns

Measures

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> from kwcoco.metrics.confusion_vectors import BinaryConfusionVectors
>>> measures1 = BinaryConfusionVectors.demo(n=15).measures()
>>> measures2 = measures1  # BinaryConfusionVectors.demo(n=15).measures()
>>> tocombine = [measures1, measures2]
>>> new_measures = Measures.combine(tocombine)
>>> new_measures.reconstruct()
>>> print('new_measures = {!r}'.format(new_measures))
>>> print('measures1 = {!r}'.format(measures1))
>>> print('measures2 = {!r}'.format(measures2))
>>> print(ub.repr2(measures1.__json__(), nl=1, sort=0))
>>> print(ub.repr2(measures2.__json__(), nl=1, sort=0))
>>> print(ub.repr2(new_measures.__json__(), nl=1, sort=0))
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.figure(fnum=1)
>>> new_measures.summary_plot()
>>> measures1.summary_plot()
>>> measures1.draw('roc')
>>> measures2.draw('roc')
>>> new_measures.draw('roc')
class kwcoco.metrics.OneVsRestConfusionVectors(cx_to_binvecs, classes)[source]

Bases: ubelt.NiceRepr

Container for multiple one-vs-rest binary confusion vectors

Variables
  • cx_to_binvecs

  • classes

Example

>>> from kwcoco.metrics import DetectionMetrics
>>> dmet = DetectionMetrics.demo(
>>>     nimgs=10, nboxes=(0, 10), n_fp=(0, 1), classes=3)
>>> cfsn_vecs = dmet.confusion_vectors()
>>> self = cfsn_vecs.binarize_ovr(keyby='name')
>>> print('self = {!r}'.format(self))
__nice__(self)
classmethod demo(cls)
Parameters

**kwargs – See kwcoco.metrics.DetectionMetrics.demo()

Returns

ConfusionVectors

keys(self)
__getitem__(self, cx)
measures(self, stabalize_thresh=7, fp_cutoff=None, monotonic_ppv=True, ap_method='pycocotools')

Creates binary confusion measures for every one-versus-rest category.

Parameters
  • stabalize_thresh (int, default=7) – if fewer than this many data points, inserts dummy stabilization data so curves can still be drawn.

  • fp_cutoff (int, default=None) – maximum number of false positives in the truncated roc curves. None is equivalent to float('inf')

  • monotonic_ppv (bool, default=True) – if True, ensures that precision is always increasing as recall decreases. This is done in pycocotools scoring, but I'm not sure it's a good idea.

SeeAlso:

BinaryConfusionVectors.measures()

Example

>>> self = OneVsRestConfusionVectors.demo()
>>> thresh_result = self.measures()['perclass']
abstract ovr_classification_report(self)
class kwcoco.metrics.PerClass_Measures(cx_to_info)[source]

Bases: ubelt.NiceRepr, kwcoco.metrics.util.DictProxy

__nice__(self)
summary(self)
classmethod from_json(cls, state)
__json__(self)
draw(self, key='mcc', prefix='', **kw)

Example

>>> # xdoctest: +REQUIRES(module:kwplot)
>>> cfsn_vecs = ConfusionVectors.demo()
>>> ovr_cfsn = cfsn_vecs.binarize_ovr(keyby='name')
>>> self = ovr_cfsn.measures()['perclass']
>>> self.draw('mcc', doclf=True, fnum=1)
>>> self.draw('pr', doclf=1, fnum=2)
>>> self.draw('roc', doclf=1, fnum=3)
draw_roc(self, prefix='', **kw)
draw_pr(self, prefix='', **kw)
summary_plot(self, fnum=1, title='', subplots='auto')

CommandLine

python ~/code/kwcoco/kwcoco/metrics/confusion_vectors.py PerClass_Measures.summary_plot --show

Example

>>> from kwcoco.metrics.confusion_vectors import *  # NOQA
>>> from kwcoco.metrics.detect_metrics import DetectionMetrics
>>> dmet = DetectionMetrics.demo(
>>>     n_fp=(0, 1), n_fn=(0, 3), nimgs=32, nboxes=(0, 32),
>>>     classes=3, rng=0, newstyle=1, box_noise=0.7, cls_noise=0.2, score_noise=0.3, with_probs=False)
>>> cfsn_vecs = dmet.confusion_vectors()
>>> ovr_cfsn = cfsn_vecs.binarize_ovr(keyby='name', ignore_classes=['vector', 'raster'])
>>> self = ovr_cfsn.measures()['perclass']
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> import seaborn as sns
>>> sns.set()
>>> self.summary_plot(title='demo summary_plot ovr', subplots=['pr', 'roc'])
>>> kwplot.show_if_requested()
>>> self.summary_plot(title='demo summary_plot ovr', subplots=['mcc', 'acc'], fnum=2)