kwcoco.coco_evaluator module
Evaluates a predicted coco dataset against a truth coco dataset.

The components in this module work programmatically or as a command line script.
class kwcoco.coco_evaluator.CocoEvalConfig(data=None, default=None, cmdline=False)[source]
Bases: scriptconfig.config.Config

Evaluate and score predicted versus truth detections / classifications in a COCO dataset.
default = {'classes_of_interest': <Value(<class 'list'>: None)>, 'draw': <Value(None: True)>, 'expt_title': <Value(<class 'str'>: '')>, 'fp_cutoff': <Value(None: inf)>, 'ignore_classes': <Value(<class 'list'>: None)>, 'implicit_ignore_classes': <Value(None: ['ignore'])>, 'implicit_negative_classes': <Value(None: ['background'])>, 'out_dpath': <Value(<class 'str'>: './coco_metrics')>, 'pred_dataset': <Value(<class 'str'>: None)>, 'true_dataset': <Value(<class 'str'>: None)>, 'use_image_names': <Value(None: False)>}
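Since CocoEvalConfig is a scriptconfig.Config subclass, the default dict above acts as a schema: user-supplied values overlay the defaults, and keys outside the schema are rejected. The overlay behaviour can be illustrated with a small stand-in using plain dicts (this is a simplified sketch, not the real scriptconfig implementation, and `build_config` is a hypothetical helper; the real class also handles command-line parsing via `cmdline=True`):

```python
# Illustrative sketch of how a scriptconfig-style Config overlays
# user-supplied data onto its declared defaults. Simplified stand-in;
# not the actual scriptconfig.Config implementation.
import math

# Defaults mirroring a subset of the CocoEvalConfig entries shown above.
DEFAULTS = {
    'true_dataset': None,
    'pred_dataset': None,
    'out_dpath': './coco_metrics',
    'classes_of_interest': None,
    'fp_cutoff': math.inf,
    'draw': True,
}


def build_config(data=None):
    """Overlay user-supplied values on the defaults, rejecting unknown keys."""
    config = dict(DEFAULTS)
    for key, value in (data or {}).items():
        if key not in config:
            raise KeyError('unknown config key: {}'.format(key))
        config[key] = value
    return config


config = build_config({
    'true_dataset': 'truth.mscoco.json',
    'pred_dataset': 'pred.mscoco.json',
})
print(config['out_dpath'])       # keys not overridden keep their defaults
print(config['true_dataset'])    # overridden by the user-supplied dict
```

The same pattern explains why the doctests below can pass a plain dict as `config` to CocoEvaluator: any unspecified key falls back to the defaults listed above.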
class kwcoco.coco_evaluator.CocoEvaluator(config)[source]
Bases: object

Abstracts the evaluation process to execute on two coco datasets.

This can be run as a standalone script, where the user specifies the paths to the true and predicted datasets explicitly, or it can be used by a higher-level script that produces the predictions and then sends them to this evaluator.
Ignore:
    >>> pred_fpath1 = ub.expandpath("$HOME/remote/viame/work/bioharn/fit/nice/bioharn-det-mc-cascade-rgb-fine-coi-v43/eval/may_priority_habcam_cfarm_v7_test.mscoc/bioharn-det-mc-cascade-rgb-fine-coi-v43__epoch_00000007/c=0.1,i=window,n=0.8,window_d=512,512,window_o=0.0/all_pred.mscoco.json")
    >>> pred_fpath2 = ub.expandpath('$HOME/tmp/cached_clf_out_cli/reclassified.mscoco.json')
    >>> true_fpath = ub.expandpath('$HOME/remote/namek/data/noaa_habcam/combos/habcam_cfarm_v8_test.mscoco.json')
    >>> config = {
    >>>     'true_dataset': true_fpath,
    >>>     'pred_dataset': pred_fpath2,
    >>>     'out_dpath': ub.expandpath('$HOME/remote/namek/tmp/reclassified_eval'),
    >>>     'classes_of_interest': [],
    >>> }
    >>> coco_eval = CocoEvaluator(config)
    >>> config = coco_eval.config
    >>> coco_eval._init()
    >>> coco_eval.evaluate()
Example
    >>> from kwcoco.coco_evaluator import CocoEvaluator
    >>> import kwcoco
    >>> import ubelt as ub
    >>> dpath = ub.ensure_app_cache_dir('kwcoco/tests/test_out_dpath')
    >>> true_dset = kwcoco.CocoDataset.demo('shapes8')
    >>> from kwcoco.demo.perterb import perterb_coco
    >>> kwargs = {
    >>>     'box_noise': 0.5,
    >>>     'n_fp': (0, 10),
    >>>     'n_fn': (0, 10),
    >>>     'with_probs': True,
    >>> }
    >>> pred_dset = perterb_coco(true_dset, **kwargs)
    >>> config = {
    >>>     'true_dataset': true_dset,
    >>>     'pred_dataset': pred_dset,
    >>>     'out_dpath': dpath,
    >>>     'classes_of_interest': [],
    >>> }
    >>> coco_eval = CocoEvaluator(config)
    >>> results = coco_eval.evaluate()
evaluate()[source]

Example
    >>> from kwcoco.coco_evaluator import *  # NOQA
    >>> from kwcoco.coco_evaluator import CocoEvaluator
    >>> import kwcoco
    >>> import ubelt as ub
    >>> dpath = ub.ensure_app_cache_dir('kwcoco/tests/test_out_dpath')
    >>> true_dset = kwcoco.CocoDataset.demo('shapes8')
    >>> from kwcoco.demo.perterb import perterb_coco
    >>> kwargs = {
    >>>     'box_noise': 0.5,
    >>>     'n_fp': (0, 10),
    >>>     'n_fn': (0, 10),
    >>>     'with_probs': True,
    >>> }
    >>> pred_dset = perterb_coco(true_dset, **kwargs)
    >>> config = {
    >>>     'true_dataset': true_dset,
    >>>     'pred_dataset': pred_dset,
    >>>     'out_dpath': dpath,
    >>> }
    >>> coco_eval = CocoEvaluator(config)
    >>> results = coco_eval.evaluate()
class kwcoco.coco_evaluator.CocoResults(measures, ovr_measures, cfsn_vecs, meta=None)[source]
Bases: ubelt.util_mixins.NiceRepr

Container class to store, draw, summarize, and serialize results from CocoEvaluator.
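The constructor signature above shows what the container bundles: overall measures, one-vs-rest per-class measures, confusion vectors, and optional metadata. The pattern, including the `__nice__`-based repr that the ubelt.NiceRepr base class provides, can be sketched with a hypothetical stand-in (`ResultsSketch` and all the sample values below are illustrative, not the real CocoResults API):

```python
# Simplified stand-in for the CocoResults container pattern. Attribute
# names follow the constructor signature above; the class itself and the
# sample values are purely illustrative.


class ResultsSketch:
    def __init__(self, measures, ovr_measures, cfsn_vecs, meta=None):
        self.measures = measures          # overall detection measures
        self.ovr_measures = ovr_measures  # one-vs-rest per-class measures
        self.cfsn_vecs = cfsn_vecs        # confusion vectors
        self.meta = meta                  # provenance / config info

    def __nice__(self):
        # NiceRepr-style summary: a short string describing the instance
        return 'classes={}'.format(sorted(self.ovr_measures))

    def __repr__(self):
        # ubelt.NiceRepr builds the repr from __nice__ like this
        return '<{}({})>'.format(self.__class__.__name__, self.__nice__())


results = ResultsSketch(
    measures={'ap': 0.72},
    ovr_measures={'cat': {'ap': 0.8}, 'dog': {'ap': 0.6}},
    cfsn_vecs=[],
    meta={'expt_title': 'demo'},
)
print(repr(results))
```

Grouping the products of an evaluation into one object like this lets a caller pass a single handle to drawing and serialization routines instead of four loose values.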
class kwcoco.coco_evaluator.CocoEvalCLIConfig(data=None, default=None, cmdline=False)[source]
Bases: scriptconfig.config.Config
default = {'classes_of_interest': <Value(<class 'list'>: None)>, 'draw': <Value(None: True)>, 'expt_title': <Value(<class 'str'>: '')>, 'fp_cutoff': <Value(None: inf)>, 'ignore_classes': <Value(<class 'list'>: None)>, 'implicit_ignore_classes': <Value(None: ['ignore'])>, 'implicit_negative_classes': <Value(None: ['background'])>, 'out_dpath': <Value(<class 'str'>: './coco_metrics')>, 'pred_dataset': <Value(<class 'str'>: None)>, 'true_dataset': <Value(<class 'str'>: None)>, 'use_image_names': <Value(None: False)>}