:py:mod:`kwcoco.demo.toydata`
=============================

.. py:module:: kwcoco.demo.toydata

.. autoapi-nested-parse::

   Generates "toydata" for demo and testing purposes.

   .. note::

      The implementation of `demodata_toy_img` and `demodata_toy_dset` should be
      redone using the tools built for `random_video_dset`, which have more
      extensible implementations.


Module Contents
---------------

Functions
~~~~~~~~~

.. autoapisummary::

   kwcoco.demo.toydata.demodata_toy_dset
   kwcoco.demo.toydata.random_video_dset
   kwcoco.demo.toydata.random_single_video_dset
   kwcoco.demo.toydata.demodata_toy_img
   kwcoco.demo.toydata.render_toy_dataset
   kwcoco.demo.toydata.render_toy_image
   kwcoco.demo.toydata.false_color
   kwcoco.demo.toydata.render_background
   kwcoco.demo.toydata.random_multi_object_path
   kwcoco.demo.toydata.random_path


Attributes
~~~~~~~~~~

.. autoapisummary::

   kwcoco.demo.toydata.profile
   kwcoco.demo.toydata.TOYDATA_VERSION


.. py:data:: profile

.. py:data:: TOYDATA_VERSION
   :annotation: = 16


.. py:function:: demodata_toy_dset(image_size=(600, 600), n_imgs=5, verbose=3, rng=0, newstyle=True, dpath=None, bundle_dpath=None, aux=None, use_cache=True, **kwargs)

   Create a toy detection problem.

   :Parameters:
       * **image_size** (*Tuple[int, int]*) -- The width and height of the
         generated images.
       * **n_imgs** (*int*) -- number of images to generate.
       * **rng** (*int | RandomState, default=0*) -- random number generator or
         seed.
       * **newstyle** (*bool, default=True*) -- create newstyle kwcoco data.
       * **dpath** (*str*) -- path to the directory that will contain the
         bundle (defaults to a kwcoco cache dir). Ignored if `bundle_dpath`
         is given.
       * **bundle_dpath** (*str*) -- path to the directory that will store
         images. If specified, `dpath` is ignored. If unspecified, a bundle
         will be written inside `dpath`.
       * **aux** (*bool*) -- if True, generates dummy auxiliary channels.
       * **verbose** (*int, default=3*) -- verbosity mode.
       * **use_cache** (*bool, default=True*) -- if True, caches the generated
         json in `dpath`.
       * **\*\*kwargs** -- used for old backwards-compatible argument names.
         `gsize` is an alias for `image_size`.

   :rtype: kwcoco.CocoDataset

   SeeAlso:
       random_video_dset

   .. rubric:: CommandLine

   .. code-block:: bash

      xdoctest -m kwcoco.demo.toydata demodata_toy_dset --show

   .. todo::

      - [ ] Non-homogeneous image sizes

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *
   >>> import kwcoco
   >>> dset = demodata_toy_dset(image_size=(300, 300), aux=True, use_cache=False)
   >>> # xdoctest: +REQUIRES(--show)
   >>> print(ub.repr2(dset.dataset, nl=2))
   >>> import kwplot
   >>> kwplot.autompl()
   >>> dset.show_image(gid=1)
   >>> ub.startfile(dset.bundle_dpath)
   >>> dset._tree()

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *
   >>> import kwcoco
   >>> dset = demodata_toy_dset(image_size=(300, 300), aux=True, use_cache=False)
   >>> print(dset.imgs[1])
   >>> dset._tree()
   >>> dset = demodata_toy_dset(image_size=(300, 300), aux=True, use_cache=False,
   >>>                          bundle_dpath='test_bundle')
   >>> print(dset.imgs[1])
   >>> dset._tree()
   >>> dset = demodata_toy_dset(
   >>>     image_size=(300, 300), aux=True, use_cache=False, dpath='test_cache_dpath')
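
   A minimal usage sketch (not one of the generated doctests above); it
   assumes only the arguments documented here and that the function returns a
   :class:`kwcoco.CocoDataset`:

   .. code-block:: python

      # Sketch: build a small uncached toy dataset and inspect it.
      # The n_imgs and image_size values are arbitrary illustrative choices.
      import kwcoco
      from kwcoco.demo.toydata import demodata_toy_dset

      dset = demodata_toy_dset(image_size=(128, 128), n_imgs=2, use_cache=False)
      assert isinstance(dset, kwcoco.CocoDataset)
      print('images:', len(dset.imgs), 'annotations:', len(dset.anns))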
.. py:function:: random_video_dset(num_videos=1, num_frames=2, num_tracks=2, anchors=None, image_size=(600, 600), verbose=3, render=False, aux=None, multispectral=False, rng=None, dpath=None, max_speed=0.01, **kwargs)

   Create a toy Coco Video Dataset.

   :Parameters:
       * **num_videos** (*int*) -- number of videos.
       * **num_frames** (*int*) -- number of images per video.
       * **num_tracks** (*int*) -- number of tracks per video.
       * **image_size** (*Tuple[int, int]*) -- The width and height of the
         generated images.
       * **render** (*bool | dict*) -- if truthy, the toy annotations are
         synthetically rendered. See :func:`render_toy_image` for details.
       * **rng** (*int | None | RandomState*) -- random seed / state.
       * **dpath** (*str*) -- only used if render is truthy; place to write
         rendered images.
       * **verbose** (*int, default=3*) -- verbosity mode.
       * **aux** (*bool*) -- if True, generates dummy auxiliary channels.
       * **multispectral** (*bool*) -- similar to aux, but does not have the
         concept of a "main" image.
       * **max_speed** (*float*) -- max speed of movers.
       * **\*\*kwargs** -- used for old backwards-compatible argument names.
         `gsize` is an alias for `image_size`.

   SeeAlso:
       random_single_video_dset

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> dset = random_video_dset(render=True, num_videos=3, num_frames=2,
   >>>                          num_tracks=10)
   >>> # xdoctest: +REQUIRES(--show)
   >>> dset.show_image(1, doclf=True)
   >>> dset.show_image(2, doclf=True)

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> dset = random_video_dset(render=False, num_videos=3, num_frames=2,
   >>>                          num_tracks=10)
   >>> dset._tree()
   >>> dset.imgs[1]
   >>> dset = random_single_video_dset()
   >>> dset._tree()
   >>> dset.imgs[1]

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> dset = random_video_dset(render=True, num_videos=3, num_frames=2, num_tracks=10)
   >>> print(dset.imgs[1])
   >>> print('dset.bundle_dpath = {!r}'.format(dset.bundle_dpath))
   >>> dset._tree()


.. py:function:: random_single_video_dset(image_size=(600, 600), num_frames=5, num_tracks=3, tid_start=1, gid_start=1, video_id=1, anchors=None, rng=None, render=False, dpath=None, autobuild=True, verbose=3, aux=None, multispectral=False, max_speed=0.01, **kwargs)

   Create the video scene layout of object positions.

   .. note::

      Does not render the data unless specified.

   :Parameters:
       * **image_size** (*Tuple[int, int]*) -- size of the images.
       * **num_frames** (*int*) -- number of frames in this video.
       * **num_tracks** (*int*) -- number of tracks in this video.
       * **tid_start** (*int, default=1*) -- track-id start index.
       * **gid_start** (*int, default=1*) -- image-id start index.
       * **video_id** (*int, default=1*) -- video-id of this video.
       * **anchors** (*ndarray | None*) -- base anchor sizes of the object
         boxes we will generate.
       * **rng** (*RandomState*) -- random state / seed.
       * **render** (*bool | dict*) -- if truthy, does the rendering according
         to the provided params in the case of dict input.
       * **autobuild** (*bool, default=True*) -- prebuild coco lookup indexes.
       * **verbose** (*int*) -- verbosity level.
       * **aux** (*bool | List[str]*) -- if specified, generates auxiliary
         channels.
       * **multispectral** (*bool*) -- if specified, simulates multispectral
         imagery. This is similar to aux, but has no "main" file.
       * **max_speed** (*float*) -- max speed of movers.
       * **\*\*kwargs** -- used for old backwards-compatible argument names.
         `gsize` is an alias for `image_size`.

   .. todo::

      - [ ] Need maximum allowed object overlap measure
      - [ ] Need better parameterized path generation
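
   Because this function only lays out the scene, a common pattern is to build
   the layout first and then rasterize it with :func:`render_toy_dataset`. The
   following is a minimal sketch with illustrative parameter values, assuming
   only the behavior documented above:

   .. code-block:: python

      # Sketch: generate the un-rendered layout, then render pixels for it.
      import kwarray
      from kwcoco.demo.toydata import random_single_video_dset, render_toy_dataset

      rng = kwarray.ensure_rng(0)
      dset = random_single_video_dset(num_frames=3, num_tracks=2, rng=rng)
      # At this point the annotations exist but no pixel data has been rendered.
      dset = render_toy_dataset(dset, rng=rng)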
   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> anchors = np.array([[0.3, 0.3], [0.1, 0.1]])
   >>> dset = random_single_video_dset(render=True, num_frames=10,
   >>>                                 num_tracks=10, anchors=anchors, max_speed=0.2)
   >>> # xdoctest: +REQUIRES(--show)
   >>> # Show the tracks in a single image
   >>> import kwplot
   >>> kwplot.autompl()
   >>> annots = dset.annots()
   >>> tids = annots.lookup('track_id')
   >>> tid_to_aids = ub.group_items(annots.aids, tids)
   >>> paths = []
   >>> track_boxes = []
   >>> for tid, aids in tid_to_aids.items():
   >>>     boxes = dset.annots(aids).boxes.to_cxywh()
   >>>     path = boxes.data[:, 0:2]
   >>>     paths.append(path)
   >>>     track_boxes.append(boxes)
   >>> import kwplot
   >>> plt = kwplot.autoplt()
   >>> ax = plt.gca()
   >>> ax.cla()
   >>> #
   >>> import kwimage
   >>> colors = kwimage.Color.distinct(len(track_boxes))
   >>> for i, boxes in enumerate(track_boxes):
   >>>     color = colors[i]
   >>>     path = boxes.data[:, 0:2]
   >>>     boxes.draw(color=color, centers={'radius': 0.01}, alpha=0.5)
   >>>     ax.plot(path.T[0], path.T[1], 'x-', color=color)

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> anchors = np.array([[0.2, 0.2], [0.1, 0.1]])
   >>> gsize = np.array([(600, 600)])
   >>> print(anchors * gsize)
   >>> dset = random_single_video_dset(render=True, num_frames=10,
   >>>                                 anchors=anchors, num_tracks=10)
   >>> # xdoctest: +REQUIRES(--show)
   >>> import kwplot
   >>> plt = kwplot.autoplt()
   >>> plt.clf()
   >>> gids = list(dset.imgs.keys())
   >>> pnums = kwplot.PlotNums(nSubplots=len(gids), nRows=1)
   >>> for gid in gids:
   >>>     dset.show_image(gid, pnum=pnums(), fnum=1, title=False)
   >>> pnums = kwplot.PlotNums(nSubplots=len(gids))

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> dset = random_single_video_dset(num_frames=10, num_tracks=10, aux=True)
   >>> assert 'auxiliary' in dset.imgs[1]
   >>> assert dset.imgs[1]['auxiliary'][0]['channels']
   >>> assert dset.imgs[1]['auxiliary'][1]['channels']

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> multispectral = True
   >>> dset = random_single_video_dset(num_frames=1, num_tracks=1, multispectral=True)
   >>> dset._check_json_serializable()
   >>> dset.dataset['images']
   >>> assert dset.imgs[1]['auxiliary'][1]['channels']
   >>> # test that we can render
   >>> render_toy_dataset(dset, rng=0, dpath=None, renderkw={})


.. py:function:: demodata_toy_img(anchors=None, image_size=(104, 104), categories=None, n_annots=(0, 50), fg_scale=0.5, bg_scale=0.8, bg_intensity=0.1, fg_intensity=0.9, gray=True, centerobj=None, exact=False, newstyle=True, rng=None, aux=None, **kwargs)

   Generate a single image with non-overlapping toy objects of available
   categories.

   .. todo::

      DEPRECATE IN FAVOR OF random_single_video_dset + render_toy_image

   :Parameters:
       * **anchors** (*ndarray*) -- Nx2 base width / height of boxes.
       * **image_size** (*Tuple[int, int]*) -- width / height of the image.
       * **categories** (*List[str]*) -- list of category names.
       * **n_annots** (*Tuple | int*) -- controls how many annotations are in
         the image. If it is a tuple, it is interpreted as uniform random
         bounds.
       * **fg_scale** (*float*) -- standard deviation of foreground intensity.
       * **bg_scale** (*float*) -- standard deviation of background intensity.
       * **bg_intensity** (*float*) -- mean of background intensity.
       * **fg_intensity** (*float*) -- mean of foreground intensity.
       * **centerobj** (*bool*) -- if 'pos', the first annotation will be in
         the center of the image; if 'neg', no annotations will be in the
         center.
       * **exact** (*bool*) -- if True, ensures that exactly the specified
         number of annots are generated.
       * **newstyle** (*bool*) -- use new-style kwcoco format.
       * **rng** (*RandomState*) -- the random state used to seed the process.
       * **aux** -- if specified, builds auxiliary channels.
       * **\*\*kwargs** -- used for old backwards-compatible argument names.
         `gsize` is an alias for `image_size`.

   .. rubric:: CommandLine

   .. code-block:: bash

      xdoctest -m kwcoco.demo.toydata demodata_toy_img:0 --profile
      xdoctest -m kwcoco.demo.toydata demodata_toy_img:1 --show

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> img, anns = demodata_toy_img(image_size=(32, 32), anchors=[[.3, .3]], rng=0)
   >>> img['imdata'] = ''.format(img['imdata'].shape)
   >>> print('img = {}'.format(ub.repr2(img)))
   >>> print('anns = {}'.format(ub.repr2(anns, nl=2, cbr=True)))
   >>> # xdoctest: +IGNORE_WANT
   img = {
       'height': 32,
       'imdata': '',
       'width': 32,
   }
   anns = [{'bbox': [15, 10, 9, 8],
     'category_name': 'star',
     'keypoints': [],
     'segmentation': {'counts': '[`06j0000O20N1000e8', 'size': [32, 32]},},
    {'bbox': [11, 20, 7, 7],
     'category_name': 'star',
     'keypoints': [],
     'segmentation': {'counts': 'g;1m04N0O20N102L[=', 'size': [32, 32]},},
    {'bbox': [4, 4, 8, 6],
     'category_name': 'superstar',
     'keypoints': [{'keypoint_category': 'left_eye', 'xy': [7.25, 6.8125]}, {'keypoint_category': 'right_eye', 'xy': [8.75, 6.8125]}],
     'segmentation': {'counts': 'U4210j0300O01010O00MVO0ed0', 'size': [32, 32]},},
    {'bbox': [3, 20, 6, 7],
     'category_name': 'star',
     'keypoints': [],
     'segmentation': {'counts': 'g31m04N000002L[f0', 'size': [32, 32]},},]

   .. rubric:: Example

   >>> # xdoctest: +REQUIRES(--show)
   >>> img, anns = demodata_toy_img(image_size=(172, 172), rng=None, aux=True)
   >>> print('anns = {}'.format(ub.repr2(anns, nl=1)))
   >>> import kwplot
   >>> kwplot.autompl()
   >>> kwplot.imshow(img['imdata'], pnum=(1, 2, 1), fnum=1)
   >>> auxdata = img['auxiliary'][0]['imdata']
   >>> kwplot.imshow(auxdata, pnum=(1, 2, 2), fnum=1)
   >>> kwplot.show_if_requested()


.. py:function:: render_toy_dataset(dset, rng, dpath=None, renderkw=None)

   Create toydata renderings for a preconstructed coco dataset.

   :Parameters:
       * **dset** (*CocoDataset*) -- A dataset that contains special
         "renderable" annotations (e.g. the demo shapes). Each image can
         contain special fields that influence how an image will be rendered.

         Currently this process is simple: it creates a noisy image with the
         shapes superimposed over where they should exist as indicated by the
         annotations. In the future this may become more sophisticated.

         Each item in `dset.dataset['images']` will be modified to add the
         "file_name" field indicating where the rendered data is written.
       * **rng** (*int | None | RandomState*) -- random state.
       * **dpath** (*str*) -- The location to write the images to. If
         unspecified, it is written to the rendered folder inside the kwcoco
         cache directory.
       * **renderkw** (*dict*) -- See :func:`render_toy_image` for details.
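
   The ``renderkw`` options mirror those documented for
   :func:`render_toy_image`. A small sketch follows; the option values are
   illustrative rather than defaults, and it assumes the keys are simply
   forwarded to the per-image renderer:

   .. code-block:: python

      # Sketch: render a layout-only dataset with color images and
      # segmentation masks enabled.
      import kwarray
      from kwcoco.demo.toydata import random_video_dset, render_toy_dataset

      rng = kwarray.ensure_rng(0)
      dset = random_video_dset(num_videos=1, num_frames=2, num_tracks=2, rng=rng)
      renderkw = {'gray': False, 'with_sseg': True}
      dset = render_toy_dataset(dset, rng=rng, renderkw=renderkw)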
   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> import kwarray
   >>> rng = None
   >>> rng = kwarray.ensure_rng(rng)
   >>> num_tracks = 3
   >>> dset = random_video_dset(rng=rng, num_videos=3, num_frames=10, num_tracks=3)
   >>> dset = render_toy_dataset(dset, rng)
   >>> # xdoctest: +REQUIRES(--show)
   >>> import kwplot
   >>> plt = kwplot.autoplt()
   >>> plt.clf()
   >>> gids = list(dset.imgs.keys())
   >>> pnums = kwplot.PlotNums(nSubplots=len(gids), nRows=num_tracks)
   >>> for gid in gids:
   >>>     dset.show_image(gid, pnum=pnums(), fnum=1, title=False)
   >>> pnums = kwplot.PlotNums(nSubplots=len(gids))
   >>> #
   >>> # for gid in gids:
   >>> #     canvas = dset.draw_image(gid)
   >>> #     kwplot.imshow(canvas, pnum=pnums(), fnum=2)


.. py:function:: render_toy_image(dset, gid, rng=None, renderkw=None)

   Modifies the dataset inplace, rendering synthetic annotations.

   This does not write to disk. Instead it writes to placeholder values in
   the image dictionary.

   :Parameters:
       * **dset** (*CocoDataset*) -- coco dataset with renderable annotations
         / images.
       * **gid** (*int*) -- image to render.
       * **rng** (*int | None | RandomState*) -- random state.
       * **renderkw** (*dict*) -- rendering config:

         - gray (bool): gray or color images
         - fg_scale (float): foreground noisiness (gauss std)
         - bg_scale (float): background noisiness (gauss std)
         - fg_intensity (float): foreground brightness (gauss mean)
         - bg_intensity (float): background brightness (gauss mean)
         - newstyle (bool): use new kwcoco datastructure formats
         - with_kpts (bool): include keypoint info
         - with_sseg (bool): include segmentation info

   :returns: the inplace-modified image dictionary
   :rtype: Dict

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> image_size = (600, 600)
   >>> num_frames = 5
   >>> verbose = 3
   >>> rng = None
   >>> import kwarray
   >>> rng = kwarray.ensure_rng(rng)
   >>> aux = 'mx'
   >>> dset = random_single_video_dset(
   >>>     image_size=image_size, num_frames=num_frames, verbose=verbose, aux=aux, rng=rng)
   >>> print('dset.dataset = {}'.format(ub.repr2(dset.dataset, nl=2)))
   >>> gid = 1
   >>> renderkw = {}
   >>> render_toy_image(dset, gid, rng, renderkw=renderkw)
   >>> img = dset.imgs[gid]
   >>> canvas = img['imdata']
   >>> # xdoctest: +REQUIRES(--show)
   >>> import kwplot
   >>> kwplot.autompl()
   >>> kwplot.imshow(canvas, doclf=True, pnum=(1, 2, 1))
   >>> dets = dset.annots(gid=gid).detections
   >>> dets.draw()
   >>> auxdata = img['auxiliary'][0]['imdata']
   >>> aux_canvas = false_color(auxdata)
   >>> kwplot.imshow(aux_canvas, pnum=(1, 2, 2))
   >>> _ = dets.draw()

   >>> # xdoctest: +REQUIRES(--show)
   >>> img, anns = demodata_toy_img(image_size=(172, 172), rng=None, aux=True)
   >>> print('anns = {}'.format(ub.repr2(anns, nl=1)))
   >>> import kwplot
   >>> kwplot.autompl()
   >>> kwplot.imshow(img['imdata'], pnum=(1, 2, 1), fnum=1)
   >>> auxdata = img['auxiliary'][0]['imdata']
   >>> kwplot.imshow(auxdata, pnum=(1, 2, 2), fnum=1)
   >>> kwplot.show_if_requested()

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> import kwarray
   >>> multispectral = True
   >>> dset = random_single_video_dset(num_frames=1, num_tracks=1, multispectral=True)
   >>> gid = 1
   >>> dset.imgs[gid]
   >>> rng = kwarray.ensure_rng(0)
   >>> renderkw = {'with_sseg': True}
   >>> img = render_toy_image(dset, gid, rng=rng, renderkw=renderkw)


.. py:function:: false_color(twochan)


.. py:function:: render_background(img, rng, gray, bg_intensity, bg_scale)


.. py:function:: random_multi_object_path(num_objects, num_frames, rng=None, max_speed=0.01)


.. py:function:: random_path(num, degree=1, dimension=2, rng=None, mode='boid')

   Create a random path using the specified curve-generation method.
   :Parameters:
       * **num** (*int*) -- number of points in the path.
       * **degree** (*int, default=1*) -- degree of curviness of the path.
       * **dimension** (*int, default=2*) -- number of spatial dimensions.
       * **mode** (*str*) -- can be boid, walk, or bezier.
       * **rng** (*RandomState, default=None*) -- seed.

   .. rubric:: References

   https://github.com/dhermes/bezier

   .. rubric:: Example

   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> num = 10
   >>> dimension = 2
   >>> degree = 3
   >>> rng = None
   >>> path = random_path(num, degree, dimension, rng, mode='boid')
   >>> # xdoctest: +REQUIRES(--show)
   >>> import kwplot
   >>> plt = kwplot.autoplt()
   >>> kwplot.multi_plot(xdata=path[:, 0], ydata=path[:, 1], fnum=1, doclf=1, xlim=(0, 1), ylim=(0, 1))
   >>> kwplot.show_if_requested()

   .. rubric:: Example

   >>> # xdoctest: +REQUIRES(--3d)
   >>> # xdoctest: +REQUIRES(module:bezier)
   >>> import kwarray
   >>> import kwplot
   >>> plt = kwplot.autoplt()
   >>> #
   >>> num = num_frames = 100
   >>> rng = kwarray.ensure_rng(0)
   >>> #
   >>> from kwcoco.demo.toydata import *  # NOQA
   >>> paths = []
   >>> paths.append(random_path(num, degree=3, dimension=3, mode='bezier'))
   >>> paths.append(random_path(num, degree=2, dimension=3, mode='bezier'))
   >>> paths.append(random_path(num, degree=4, dimension=3, mode='bezier'))
   >>> #
   >>> from mpl_toolkits.mplot3d import Axes3D  # NOQA
   >>> ax = plt.gca(projection='3d')
   >>> ax.cla()
   >>> #
   >>> for path in paths:
   >>>     time = np.arange(len(path))
   >>>     ax.plot(time, path.T[0] * 1, path.T[1] * 1, 'o-')
   >>> ax.set_xlim(0, num_frames)
   >>> ax.set_ylim(-.01, 1.01)
   >>> ax.set_zlim(-.01, 1.01)
   >>> ax.set_xlabel('x')
   >>> ax.set_ylabel('y')
   >>> ax.set_zlabel('z')
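
   A small sketch contrasting two of the documented modes. It assumes each
   mode returns an array of shape ``(num, dimension)`` with values roughly in
   ``[0, 1]``, as the examples above suggest:

   .. code-block:: python

      # Sketch: generate same-length paths with two different modes and
      # compare their shapes (shape assumption noted in the lead-in).
      import kwarray
      from kwcoco.demo.toydata import random_path

      rng = kwarray.ensure_rng(42)
      walk_path = random_path(20, dimension=2, rng=rng, mode='walk')
      boid_path = random_path(20, dimension=2, rng=rng, mode='boid')
      print(walk_path.shape, boid_path.shape)  # both expected to be (20, 2)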