:py:mod:`kwcoco.metrics.voc_metrics`
====================================

.. py:module:: kwcoco.metrics.voc_metrics


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   kwcoco.metrics.voc_metrics.VOC_Metrics


Functions
~~~~~~~~~

.. autoapisummary::

   kwcoco.metrics.voc_metrics._pr_curves
   kwcoco.metrics.voc_metrics._voc_eval
   kwcoco.metrics.voc_metrics._voc_ave_precision


.. py:class:: VOC_Metrics(classes=None)

   Bases: :py:obj:`ubelt.NiceRepr`

   API to compute object detection scores using the Pascal VOC evaluation method.

   To use, add true and predicted detections for each image and then run the
   :func:`VOC_Metrics.score` function.

   :ivar recs: true boxes for each image. Maps image ids to a list of records
               within that image. Each record is a tlbr bbox, a difficult flag,
               and a class name.
   :vartype recs: Dict[int, List[dict]]

   :ivar cx_to_lines: VOC formatted predictions. Maps each class index to all
                      predictions for that category. Each "line" is a list of
                      ``[<imgid>, <score>, <tl_x>, <tl_y>, <br_x>, <br_y>]``.
   :vartype cx_to_lines: Dict[int, List]

   .. py:method:: __nice__(self)


   .. py:method:: add_truth(self, true_dets, gid)


   .. py:method:: add_predictions(self, pred_dets, gid)


   .. py:method:: score(self, iou_thresh=0.5, bias=1, method='voc2012')

      Compute VOC scores for every category.

      .. rubric:: Example

      >>> from kwcoco.metrics.detect_metrics import DetectionMetrics
      >>> from kwcoco.metrics.voc_metrics import *  # NOQA
      >>> dmet = DetectionMetrics.demo(
      >>>     nimgs=1, nboxes=(0, 100), n_fp=(0, 30), n_fn=(0, 30),
      >>>     classes=2, score_noise=0.9)
      >>> self = VOC_Metrics(classes=dmet.classes)
      >>> self.add_truth(dmet.true_detections(0), 0)
      >>> self.add_predictions(dmet.pred_detections(0), 0)
      >>> voc_scores = self.score()
      >>> # xdoctest: +REQUIRES(--show)
      >>> import kwplot
      >>> kwplot.autompl()
      >>> kwplot.figure(fnum=1, doclf=True)
      >>> voc_scores['perclass'].draw()

      kwplot.figure(fnum=2)
      dmet.true_detections(0).draw(color='green', labels=None)
      dmet.pred_detections(0).draw(color='blue', labels=None)
      kwplot.autoplt().gca().set_xlim(0, 100)
      kwplot.autoplt().gca().set_ylim(0, 100)


.. py:function:: _pr_curves(y, method='voc2012')

   Compute a PR curve from detection confusion records using the specified method.

   :Parameters: **y** (*pd.DataFrame | DataFrameArray*) -- output of detection_confusions

   :returns: Tuple[float, ndarray, ndarray]

   .. rubric:: Example

   >>> import pandas as pd
   >>> y1 = pd.DataFrame.from_records([
   >>>     {'pred': 0, 'score': 10.00, 'true': -1, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 1.65, 'true': 0, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 8.64, 'true': -1, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 3.97, 'true': 0, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 1.68, 'true': 0, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 5.06, 'true': 0, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 0.25, 'true': 0, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 1.75, 'true': 0, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 8.52, 'true': 0, 'weight': 1.00},
   >>>     {'pred': 0, 'score': 5.20, 'true': 0, 'weight': 1.00},
   >>> ])
   >>> import kwarray
   >>> y2 = kwarray.DataFrameArray(y1)
   >>> _pr_curves(y2)
   >>> _pr_curves(y1)


.. py:function:: _voc_eval(lines, recs, classname, iou_thresh=0.5, method='voc2012', bias=1.0)

   VOC AP evaluation for a single category.

   :Parameters: * **lines** (*List[list]*) -- VOC formatted predictions. Each "line" is a list of
                  ``[<imgid>, <score>, <tl_x>, <tl_y>, <br_x>, <br_y>]``.

                * **recs** (*Dict[int, List[dict]]*) -- true boxes for each image. Maps image ids
                  to a list of records within that image. Each record is a tlbr bbox, a
                  difficult flag, and a class name.

                * **classname** (*str*) -- the category to evaluate.

                * **method** (*str*) -- code for how the AP is computed.

                * **bias** (*float*) -- either 1.0 or 0.0.

   :returns: info about the evaluation containing the AP as well as the fp, tp, prec, and rec arrays.
   :rtype: Dict

   .. note::

      Raw replication of the MATLAB implementation for creating assignments and
      the resulting PR-curves and AP. Based on MATLAB code [1].

   .. rubric:: References

   [1] http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar
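
   A minimal usage sketch (not one of the package's own doctests), assuming
   hand-built ``lines`` and ``recs`` in the formats described above. The image
   ids, boxes, scores, the record key names (``'bbox'``, ``'difficult'``,
   ``'name'``), and the ``'ap'`` result key are illustrative assumptions.

   .. code-block:: python

      from kwcoco.metrics.voc_metrics import _voc_eval

      # Predictions for one category; each line is
      # [<imgid>, <score>, <tl_x>, <tl_y>, <br_x>, <br_y>] (made-up values)
      lines = [
          [0, 0.90, 10, 10, 50, 50],
          [0, 0.40, 12, 12, 48, 52],
          [1, 0.75, 30, 30, 80, 90],
      ]

      # True boxes per image id; each record holds a tlbr bbox,
      # a difficult flag, and a class name (key names assumed).
      recs = {
          0: [{'bbox': [11, 11, 49, 51], 'difficult': False, 'name': 'cat'}],
          1: [{'bbox': [28, 32, 82, 88], 'difficult': False, 'name': 'cat'}],
      }

      info = _voc_eval(lines, recs, classname='cat', iou_thresh=0.5,
                       method='voc2012')
      # ``info`` holds the fp / tp / prec / rec arrays and the average
      # precision (assumed here to be stored under the 'ap' key).
      print(info['prec'], info['rec'], info['ap'])
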

.. py:function:: _voc_ave_precision(rec, prec, method='voc2012')

   Compute AP from precision and recall.

   Based on MATLAB code in [1]_, [2]_, and [3]_.

   :Parameters: * **rec** (*ndarray*) -- recall

                * **prec** (*ndarray*) -- precision

                * **method** (*str*) -- either voc2012 or voc2007

   :returns: ap: average precision
   :rtype: float

   .. rubric:: References

   .. [1] http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar
   .. [2] https://github.com/rbgirshick/voc-dpm/blob/master/test/pascal_eval.m
   .. [3] https://github.com/rbgirshick/voc-dpm/blob/c0b88564bd668bcc6216bbffe96cb061613be768/utils/bootstrap/VOCevaldet_bootstrap.m
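
   A small sketch comparing the every-point interpolation of ``voc2012`` with
   the 11-point interpolation of ``voc2007``. The precision/recall values below
   are made up for illustration only.

   .. code-block:: python

      import numpy as np
      from kwcoco.metrics.voc_metrics import _voc_ave_precision

      # A toy monotonically increasing recall curve and its precision values.
      rec = np.array([0.0, 0.2, 0.4, 0.4, 0.6, 0.8, 1.0])
      prec = np.array([1.0, 1.0, 0.8, 0.67, 0.6, 0.57, 0.5])

      ap_2012 = _voc_ave_precision(rec, prec, method='voc2012')
      ap_2007 = _voc_ave_precision(rec, prec, method='voc2007')
      print(ap_2012, ap_2007)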