kwcoco.demo.toydata module¶
kwcoco.demo.toydata.demodata_toy_img(anchors=None, gsize=(104, 104), categories=None, n_annots=(0, 50), fg_scale=0.5, bg_scale=0.8, bg_intensity=0.1, fg_intensity=0.9, gray=True, centerobj=None, exact=False, newstyle=True, rng=None, aux=None)[source]¶

Generate a single image with non-overlapping toy objects of available categories.
Parameters: - anchors (ndarray) – Nx2 base width / height of boxes
- gsize (Tuple[int, int]) – width / height of the image
- categories (List[str]) – list of category names
- n_annots (Tuple | int) – controls how many annotations are in the image; if a tuple, it is interpreted as uniform random bounds
- fg_scale (float) – standard deviation of foreground intensity
- bg_scale (float) – standard deviation of background intensity
- bg_intensity (float) – mean of background intensity
- fg_intensity (float) – mean of foreground intensity
- centerobj (bool) – if 'pos', the first annotation is placed in the center of the image; if 'neg', no annotations are placed in the center.
- exact (bool) – if True, ensures that exactly the specified number of annots is generated.
- newstyle (bool) – use new-style mscoco format
- rng (RandomState) – the random state used to seed the process
- aux – if specified, builds auxiliary channels
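The uniform-bounds behavior of n_annots can be sketched as follows; resolve_n_annots is a hypothetical helper illustrating the documented semantics, not the kwcoco internal logic:

```python
import random

def resolve_n_annots(n_annots, rng=None):
    # Hypothetical helper: an int is used as-is; a (low, high) tuple
    # is interpreted as inclusive uniform random bounds, as documented.
    rng = rng if rng is not None else random.Random(0)
    if isinstance(n_annots, tuple):
        low, high = n_annots
        return rng.randint(low, high)
    return n_annots

print(resolve_n_annots(7))        # always 7
print(resolve_n_annots((0, 50)))  # some value in [0, 50]
```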
CommandLine:
    xdoctest -m kwcoco.demo.toydata demodata_toy_img:0 --profile
    xdoctest -m kwcoco.demo.toydata demodata_toy_img:1 --show
Example
>>> from kwcoco.demo.toydata import *  # NOQA
>>> img, anns = demodata_toy_img(gsize=(32, 32), anchors=[[.3, .3]], rng=0)
>>> img['imdata'] = '<ndarray shape={}>'.format(img['imdata'].shape)
>>> print('img = {}'.format(ub.repr2(img)))
>>> print('anns = {}'.format(ub.repr2(anns, nl=2, cbr=True)))
>>> # xdoctest: +IGNORE_WANT
img = {
    'height': 32,
    'imdata': '<ndarray shape=(32, 32, 3)>',
    'width': 32,
}
anns = [{'bbox': [15, 10, 9, 8],
  'category_name': 'star',
  'keypoints': [],
  'segmentation': {'counts': '[`06j0000O20N1000e8', 'size': [32, 32]},},
 {'bbox': [11, 20, 7, 7],
  'category_name': 'star',
  'keypoints': [],
  'segmentation': {'counts': 'g;1m04N0O20N102L[=', 'size': [32, 32]},},
 {'bbox': [4, 4, 8, 6],
  'category_name': 'superstar',
  'keypoints': [{'keypoint_category': 'left_eye', 'xy': [7.25, 6.8125]}, {'keypoint_category': 'right_eye', 'xy': [8.75, 6.8125]}],
  'segmentation': {'counts': 'U4210j0300O01010O00MVO0ed0', 'size': [32, 32]},},
 {'bbox': [3, 20, 6, 7],
  'category_name': 'star',
  'keypoints': [],
  'segmentation': {'counts': 'g31m04N000002L[f0', 'size': [32, 32]},},]
Example
>>> # xdoctest: +REQUIRES(--show)
>>> img, anns = demodata_toy_img(gsize=(172, 172), rng=None, aux=True)
>>> print('anns = {}'.format(ub.repr2(anns, nl=1)))
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(img['imdata'], pnum=(1, 2, 1), fnum=1)
>>> auxdata = img['auxillary'][0]['imdata']
>>> kwplot.imshow(auxdata, pnum=(1, 2, 2), fnum=1)
>>> kwplot.show_if_requested()
Ignore:
    from kwcoco.demo.toydata import *
    import xinspect
    globals().update(xinspect.get_kwargs(demodata_toy_img))
kwcoco.demo.toydata.demodata_toy_dset(gsize=(600, 600), n_imgs=5, verbose=3, rng=0, newstyle=True, dpath=None, aux=None, cache=True)[source]¶

Create a toy detection problem.
Parameters: - gsize (Tuple) – size of the images
- n_imgs (int) – number of images to generate
- rng (int | RandomState) – random number generator or seed
- newstyle (bool, default=True) – create newstyle mscoco data
- dpath (str) – path to the output image directory, defaults to using kwcoco cache dir
Returns: dataset in mscoco format
Return type: dict
SeeAlso:
    random_video_dset
CommandLine:
    xdoctest -m kwcoco.demo.toydata demodata_toy_dset --show
Ignore:
    import xdev
    globals().update(xdev.get_func_kwargs(demodata_toy_dset))
Todo
- [ ] Non-homogeneous image sizes
Example
>>> from kwcoco.demo.toydata import *
>>> import kwcoco
>>> dataset = demodata_toy_dset(gsize=(300, 300), aux=True, cache=False)
>>> dpath = ub.ensure_app_cache_dir('kwcoco', 'toy_dset')
>>> dset = kwcoco.CocoDataset(dataset)
>>> # xdoctest: +REQUIRES(--show)
>>> print(ub.repr2(dset.dataset, nl=2))
>>> import kwplot
>>> kwplot.autompl()
>>> dset.show_image(gid=1)
>>> ub.startfile(dpath)
kwcoco.demo.toydata.random_video_dset(num_videos=1, num_frames=2, num_tracks=2, anchors=None, gsize=(600, 600), verbose=3, render=False, rng=None)[source]¶

Create a toy Coco Video Dataset.
Parameters: - num_videos – number of videos
- num_frames – number of images per video
- num_tracks – number of tracks per video
- gsize – image size
- render (bool | dict) – if truthy the toy annotations are synthetically rendered. See render_toy_image for details.
- rng (int | None | RandomState) – random seed / state
- SeeAlso:
- random_single_video_dset
Example
>>> from kwcoco.demo.toydata import *  # NOQA
>>> dset = random_video_dset(render=True, num_videos=3, num_frames=2, num_tracks=10)
>>> # xdoctest: +REQUIRES(--show)
>>> dset.show_image(1, doclf=True)
>>> dset.show_image(2, doclf=True)

Ignore:
    import xdev
    globals().update(xdev.get_func_kwargs(random_video_dset))
    num_videos = 2
kwcoco.demo.toydata.random_single_video_dset(gsize=(600, 600), num_frames=5, num_tracks=3, tid_start=1, gid_start=1, video_id=1, anchors=None, rng=None, render=False, autobuild=True, verbose=3)[source]¶

Create the video scene layout of object positions.
Parameters: - gsize (Tuple[int, int]) – size of the images
- num_frames (int) – number of frames in this video
- num_tracks (int) – number of tracks in this video
- tid_start (int, default=1) – track-id start index
- gid_start (int, default=1) – image-id start index
- video_id (int, default=1) – video-id of this video
- anchors (ndarray | None) – base anchor sizes of the object boxes we will generate.
- rng (RandomState) – random state / seed
- render (bool | dict) – if truthy, the toy annotations are synthetically rendered; a dict input supplies the rendering parameters.
- autobuild (bool, default=True) – prebuild coco lookup indexes
- verbose (int) – verbosity level
Todo
- [ ] Need maximum allowed object overlap measure
- [ ] Need better parameterized path generation
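For the first todo item, a maximum-overlap measure could be sketched as a pairwise IoU check over the generated boxes; this is a hypothetical helper for intuition, not part of kwcoco:

```python
def box_iou(a, b):
    # Boxes in (x, y, w, h) format; returns intersection-over-union.
    ax1, ay1, aw, ah = a
    bx1, by1, bw, bh = b
    ix = max(0.0, min(ax1 + aw, bx1 + bw) - max(ax1, bx1))
    iy = max(0.0, min(ay1 + ah, by1 + bh) - max(ay1, by1))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def max_pairwise_overlap(boxes):
    # Hypothetical measure: the largest IoU over all box pairs.
    best = 0.0
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            best = max(best, box_iou(boxes[i], boxes[j]))
    return best
```

A generator could reject a layout whenever max_pairwise_overlap exceeds an allowed threshold.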
Example
>>> from kwcoco.demo.toydata import *  # NOQA
>>> anchors = np.array([ [0.3, 0.3], [0.1, 0.1]])
>>> dset = random_single_video_dset(render=True, num_frames=10, num_tracks=10, anchors=anchors)
>>> # xdoctest: +REQUIRES(--show)
>>> # Show the tracks in a single image
>>> import kwplot
>>> kwplot.autompl()
>>> annots = dset.annots()
>>> tids = annots.lookup('track_id')
>>> tid_to_aids = ub.group_items(annots.aids, tids)
>>> paths = []
>>> track_boxes = []
>>> for tid, aids in tid_to_aids.items():
>>>     boxes = dset.annots(aids).boxes.to_cxywh()
>>>     path = boxes.data[:, 0:2]
>>>     paths.append(path)
>>>     track_boxes.append(boxes)
>>> import kwplot
>>> plt = kwplot.autoplt()
>>> ax = plt.gca()
>>> ax.cla()
>>> #
>>> import kwimage
>>> colors = kwimage.Color.distinct(len(track_boxes))
>>> for i, boxes in enumerate(track_boxes):
>>>     color = colors[i]
>>>     path = boxes.data[:, 0:2]
>>>     boxes.draw(color=color, centers={'radius': 0.01}, alpha=0.5)
>>>     ax.plot(path.T[0], path.T[1], 'x-', color=color)
Example
>>> from kwcoco.demo.toydata import *  # NOQA
>>> anchors = np.array([ [0.2, 0.2], [0.1, 0.1]])
>>> gsize = np.array([(600, 600)])
>>> print(anchors * gsize)
>>> dset = random_single_video_dset(render=True, num_frames=10, anchors=anchors, num_tracks=10)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> plt = kwplot.autoplt()
>>> plt.clf()
>>> gids = list(dset.imgs.keys())
>>> pnums = kwplot.PlotNums(nSubplots=len(gids), nRows=1)
>>> for gid in gids:
>>>     dset.show_image(gid, pnum=pnums(), fnum=1, title=False)
>>> pnums = kwplot.PlotNums(nSubplots=len(gids))
kwcoco.demo.toydata.render_toy_dataset(dset, rng, dpath=None, renderkw=None)[source]¶

Create toydata renderings for a preconstructed coco dataset.
Example
>>> from kwcoco.demo.toydata import *  # NOQA
>>> import kwarray
>>> rng = None
>>> rng = kwarray.ensure_rng(rng)
>>> num_tracks = 3
>>> dset = random_video_dset(rng=rng, num_videos=3, num_frames=10, num_tracks=3)
>>> dset = render_toy_dataset(dset, rng)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> plt = kwplot.autoplt()
>>> plt.clf()
>>> gids = list(dset.imgs.keys())
>>> pnums = kwplot.PlotNums(nSubplots=len(gids), nRows=num_tracks)
>>> for gid in gids:
>>>     dset.show_image(gid, pnum=pnums(), fnum=1, title=False)
>>> pnums = kwplot.PlotNums(nSubplots=len(gids))
>>> #
>>> # for gid in gids:
>>> #     canvas = dset.draw_image(gid)
>>> #     kwplot.imshow(canvas, pnum=pnums(), fnum=2)
kwcoco.demo.toydata.render_toy_image(dset, gid, rng=None, renderkw=None)[source]¶

Modifies dataset inplace, rendering synthetic annotations.
Parameters: - dset (CocoDataset) – coco dataset with renderable annotations / images
- gid (int) – image to render
- rng (int | None | RandomState) – random state
- renderkw (dict) – rendering config:
  gray (bool): gray or color images
  fg_scale (float): foreground noisiness (gauss std)
  bg_scale (float): background noisiness (gauss std)
  fg_intensity (float): foreground brightness (gauss mean)
  bg_intensity (float): background brightness (gauss mean)
  newstyle (bool): use new kwcoco datastructure formats
  with_kpts (bool): include keypoint info
  with_sseg (bool): include segmentation info
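A sketch of assembling a renderkw config covering the fields listed above; make_renderkw is a hypothetical helper, and the defaults shown are assumptions borrowed from the demodata_toy_img signature, not the actual renderkw defaults:

```python
# Illustrative defaults for the documented renderkw fields.
# These values are assumptions; the real defaults live in kwcoco.demo.toydata.
RENDERKW_DEFAULTS = {
    'gray': True,          # gray or color images
    'fg_scale': 0.5,       # foreground noisiness (gauss std)
    'bg_scale': 0.8,       # background noisiness (gauss std)
    'fg_intensity': 0.9,   # foreground brightness (gauss mean)
    'bg_intensity': 0.1,   # background brightness (gauss mean)
    'newstyle': True,      # use new kwcoco datastructure formats
    'with_kpts': False,    # include keypoint info
    'with_sseg': False,    # include segmentation info
}

def make_renderkw(**overrides):
    # Hypothetical helper: reject unknown keys, then overlay overrides.
    unknown = set(overrides) - set(RENDERKW_DEFAULTS)
    if unknown:
        raise KeyError('unknown renderkw fields: {}'.format(sorted(unknown)))
    return {**RENDERKW_DEFAULTS, **overrides}

renderkw = make_renderkw(gray=False, with_kpts=True)
```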
Example
>>> from kwcoco.demo.toydata import *  # NOQA
>>> gsize=(600, 600)
>>> num_frames=5
>>> verbose=3
>>> rng = None
>>> import kwarray
>>> rng = kwarray.ensure_rng(rng)
>>> dset = random_video_dset(
>>>     gsize=gsize, num_frames=num_frames, verbose=verbose, rng=rng, num_videos=2)
>>> print('dset.dataset = {}'.format(ub.repr2(dset.dataset, nl=2)))
>>> gid = 1
>>> renderkw = dict(
...     gray=0,
... )
>>> render_toy_image(dset, gid, rng, renderkw=renderkw)
>>> gid = 1
>>> canvas = dset.imgs[gid]['imdata']
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.imshow(canvas, doclf=True)
>>> dets = dset.annots(gid=gid).detections
>>> dets.draw()
kwcoco.demo.toydata.random_multi_object_path(num_objects, num_frames, rng=None)[source]¶

Ignore:
    num_objects = 30
    num_frames = 30
    rng = None
    from kwcoco.demo.toydata import *  # NOQA
    paths = random_multi_object_path(num_objects, num_frames, rng)

    import kwplot
    plt = kwplot.autoplt()
    ax = plt.gca()
    ax.cla()
    ax.set_xlim(-.01, 1.01)
    ax.set_ylim(-.01, 1.01)

    for path in paths:
        ax.plot(path.T[0], path.T[1], 'x-')
kwcoco.demo.toydata.random_path(num, degree=1, dimension=2, rng=None, mode='walk')[source]¶

Create a random path using a bezier curve.
Parameters: - num (int) – number of points in the path
- degree (int, default=1) – degree of curviness of the path
- dimension (int, default=2) – number of spatial dimensions
- rng (RandomState, default=None) – seed
References
https://github.com/dhermes/bezier
Example
>>> from kwcoco.demo.toydata import *  # NOQA
>>> num = 10
>>> dimension = 2
>>> degree = 3
>>> rng = None
>>> path = random_path(num, degree, dimension, rng, mode='walk')
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> plt = kwplot.autoplt()
>>> kwplot.multi_plot(xdata=path[:, 0], ydata=path[:, 1], fnum=1, doclf=1, xlim=(0, 1), ylim=(0, 1))
>>> kwplot.show_if_requested()
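For intuition, the 'walk' mode can be sketched as cumulative random steps rescaled into the unit square; the actual implementation fits a bezier curve (see the reference above), so this self-contained helper is only an approximation of the idea:

```python
import random

def random_walk_path(num, rng=None):
    # Sketch of the 'walk' idea: accumulate random 2D steps, then
    # min-max normalize each coordinate into [0, 1]. Not kwcoco's code.
    rng = rng if rng is not None else random.Random(0)
    pts = [(0.0, 0.0)]
    for _ in range(num - 1):
        x, y = pts[-1]
        pts.append((x + rng.uniform(-1, 1), y + rng.uniform(-1, 1)))

    def normalize(vals):
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        return [(v - lo) / span for v in vals]

    xs = normalize([p[0] for p in pts])
    ys = normalize([p[1] for p in pts])
    return list(zip(xs, ys))

path = random_walk_path(10)
```

Raising the degree in the real random_path smooths this jagged walk into a curved trajectory.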