Added updated guide and images #241

Merged: 4 commits into cvat-ai:release-0.3 on Dec 24, 2018

Conversation

aschernov
Contributor

No description provided.

@nmanovic
Contributor

@bsekachev, could you please look at the PR as well?

nmanovic added this to the 0.3.0 - Release milestone on Dec 19, 2018
@bsekachev
Member

Basic navigation section:
3. An image can be moved/shifted by holding left mouse button inside some area without annotated objects. If Shift key is pressed then all annotated objects are ignored otherwise a highlighted bounding box will be moved instead of the image itself. Usually the functionality is used together with zoom to precisely locate an object of interest.

The Shift key no longer works. Use the mouse wheel button instead.

@bsekachev
Member

bsekachev commented Dec 19, 2018

Annotation mode (basics) section
3. In the list of objects you can see the labeled car. In the side panel you can perform basic operations under the object.

In the image, "type" is also an attribute, but I suppose it is presented like a label.

@bsekachev
Member

Shortcuts table:

+/- | change relative order of highlighted polygon

Not only for polygons: it works for any shape, and only if "z_order" is enabled.

@bsekachev
Member

Open Menu button:
The picture is obsolete; the interface has been updated.

@bsekachev
Member

I haven't found information about job/task status. It can be changed now.

nmanovic changed the base branch from develop to release-0.3 on December 20, 2018 at 15:38

2. After that press ``Open Menu`` and then ``Dump Annotation`` button.

![](static/documentation/images/image028.jpg)

3. The annotation will be written into **.xml** file. To find the annotation file go to the directory where your browser saves downloaded files by default. For more information visit [.xml format page](/documentation/xml_format.html).
3. The annotation will be written into **.xml** file. To find the annotation file go to the directory where your browser saves downloaded files by default. For more information visit [.xml format page](./documentation/xml_format.html).

![](static/documentation/images/image029.jpg)

Need to update screenshot for dump file.
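For reviewers who want to sanity-check a dumped file while updating the screenshot: the dump is plain XML, so it can be inspected with any XML tooling. Below is a minimal Python sketch that counts boxes per label; the element and attribute names (``image``, ``box``, ``label``) are assumptions based on the .xml format page linked above, not something verified in this PR.

```python
# Minimal sketch: summarize a dumped CVAT annotation .xml file.
# Assumes <image> elements containing <box label="..."> children,
# per the description on the xml_format documentation page.
import sys
from collections import Counter
import xml.etree.ElementTree as ET

def summarize(path):
    root = ET.parse(path).getroot()
    images = 0
    boxes_per_label = Counter()
    for image in root.iter("image"):
        images += 1
        for box in image.iter("box"):
            boxes_per_label[box.get("label", "unknown")] += 1
    print(f"{images} images, boxes per label: {dict(boxes_per_label)}")

if __name__ == "__main__":
    summarize(sys.argv[1])  # e.g. python check_dump.py task_annotation.xml
```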

---
__Fill Opacity slider__

Changes opacity of every Bounding Box in the Annotation.

Change


Please use lower case for "bounding box" and "annotation" in the documentation. I don't see any reason why they should start with an upper-case letter.


![](static/documentation/images/image086.jpg)

Opacity slides right to left, starting from ``white borders`` and ``100-80-60-40-20% white body`` to ``colored borders`` and ``0-20-40-60-80-100% colored body``.

This explanation is too complex.


__Selected Fill Opacity slider__

Changes opacity of Bounding Box under mouse pointer.

Change


![](static/documentation/images/image087.jpg)

Opacity slides from 0% to 100% colored body

Opacity can be changed from 0% to 100%.


Changes color scheme of Annotation:
- ``Instance`` — every Bounding Box have random color
- ![](static/documentation/images/image095.jpg)

Don't use a ``-`` prefix before an image.

- ``Instance`` — every Bounding Box have random color
- ![](static/documentation/images/image095.jpg)
- ``Group`` — every group of Boxes have its own random color, ungrouped Boxes are white
- ![](static/documentation/images/image094.jpg)

Don't use a ``-`` prefix before an image.

- ``Group`` — every group of Boxes have its own random color, ungrouped Boxes are white
- ![](static/documentation/images/image094.jpg)
- ``Label`` — every Label (i.e. Vehicle, Pedestrian, Roadmark) have its own random color
- ![](static/documentation/images/image093.jpg)

Don't use a ``-`` prefix before an image.

Changes color scheme of Annotation:
- ``Instance`` — every Bounding Box have random color
- ![](static/documentation/images/image095.jpg)
- ``Group`` — every group of Boxes have its own random color, ungrouped Boxes are white

Please don't use upper case in the middle of a sentence. What is the reason for that?

@@ -627,6 +680,18 @@ Example | Description
``face[attr/glass="sunglass" or attr/glass="no"]`` | faces with sunglasses or without glasses at all.
```person[attr/race="asian"] | car[attr/model="bmw" or attr/model="mazda"]``` | asian persons or bmw or mazda cars.
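For reviewers parsing the rows above: each expression combines a label name with bracketed attribute predicates of the form ``attr/<name>="<value>"``; ``or`` joins predicates inside the brackets, and ``|`` unions whole selectors. A purely illustrative expression in the same style, using a made-up ``vehicle`` label and ``parked`` attribute (not taken from the guide), would be ``vehicle[attr/parked="true"] | person[attr/race="asian"]``.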

## Analitics

If you press ``F3``, URL like Kibana Analytics App will opens in the next tab.

If you press F3, a new tab with analytics and logs will be opened.


If you press ``F3``, URL like Kibana Analytics App will opens in the next tab.

It allows to see how much Working Time every User spend on each Task and how much they did, over any time range.

Why "Working Time"? Why not "working time"?

nmanovic changed the title from "Added updated guide and images" to "WIP: Added updated guide and images" on Dec 20, 2018
nmanovic changed the title from "WIP: Added updated guide and images" to "Added updated guide and images" on Dec 24, 2018
nmanovic merged commit 8c51ad8 into cvat-ai:release-0.3 on Dec 24, 2018
nmanovic added a commit that referenced this pull request Dec 29, 2018
* Bug has been fixed: impossible to lock/occlude object in AAM
* Bug has been fixed: invisible points actually are visible
* Bug has been fixed: impossible to close points after editing (#98)
* doc: grammatical cleanup of README.md (#107)
* Add info about development environment into CONTRIBUTING.md (#110)
* Now we store virtual URL instead of update it in the browser address bar (#112)
* Copy URL, Frame URL and object URL functionality in a context menu
* Bug has been fixed: label UIs don't update after changelabel (#109)
* Common escape button for exit from creating/groupping/merging/pasting/aam
* Switch outside/keyframe shortkeys
* Fix django vulnerability (#121)
* Add analytics component (#118)
* Incremental save of annotations (#120)
* Create task timeout 1h -> 4h. (#136)
* OpenVino integration (#134)
* Update README.md (#138)
* Add an extra field into meta section of a dump file (#149)
* Job status was implemented (#153)
* Back link to task from annotation view (#156)
* Change a task with labels and attributes in admin panel (#157)
* Permissions per tasks and jobs (#185)
* Fix context menu, text visibility for small images (#202)
* Fixed: both context menu are opened simultaneously
* Fixed: shape can be unavailable behind text
* Fixed: invisible text outside frame
* Fix upload big xml files for tasks (#199)
* Add Questions section to Readme.md (#226)
* Fixed labels order (#242)
* Propagate behaviour has been updated in cases with a different resolution (#246)
* Updated the guide and images (#241)
* Fix number attribute for float numbers. (#258)
TOsmanov pushed a commit to TOsmanov/cvat that referenced this pull request Aug 23, 2021
* Make formats docs folder, move format docs

* Create COCO format documentation
TOsmanov pushed a commit to TOsmanov/cvat that referenced this pull request Aug 23, 2021
* Rename 'openvino' plugin to 'openvino_plugin' (cvat-ai#205)

Co-authored-by: Jihyeon Yi <jihyeon.yi@intel.com>

* Make remap labels more accurate, allow explicit label deletion, add docs, update tests (cvat-ai#203)

* Kate/handling multiple attributes and speed up detection split (cvat-ai#207)

* better handling multi-attributes for classification_split

* handling multi-attributes better for detection

* bugfix in calculating required number of images for splitting 2 correct side effect of the changes for re-id split

* allow multiple subsets with arbitrary names

* rename _is_number to _is_float and improve it

* Fix voc to coco example (cvat-ai#209)

* Fix export filtering

* update example in readme

* Fix export filename for LabelMe format (cvat-ai#200)

* change export filename for LabelMe format

* Allow simple merge for datasets with no labels

* Add a more complex test on relative paths

* Support escaping in attributes

* update changelog

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* split unlabeled data into subsets for task-specific splitters (cvat-ai#211)

* split unlabeled data into subsets for classification, detection. for re-id, 'not-supported' subsets for this data

* Fix image ext on saving in cvat format (cvat-ai#214)

* fix image saving in cvat format

* update changelog

* Label "face" for bounding boxes in Wider Face (cvat-ai#215)

* add face label

* update changelog

* Adding "difficult", "truncated", "occluded" attributes when converting to Pascal VOC if they are not present (cvat-ai#216)

* remove check for 'difficult' attribute

* remove check for 'truncated' and 'occluded' attributes

* update changelog

* Ignore empty lines in YOLO annotations (cvat-ai#221)

* Ignore empty lines in yolo annotations

* Add type hints for image class, catch image opening errors in image.size

* update changelog

* Classification task in LFW dataset format (cvat-ai#222)

* add classification

* update changelog

* update documentation

* Add splitter for segmentation task  (cvat-ai#223)

* added segmentation_split

* updated changelog

* rename reidentification to reid

* Support for CIFAR-10/100 format (cvat-ai#225)

* add CIFAR dataset format

* add CIFAR to documentation

* update Changelog

* add validation item for instance segmentation (cvat-ai#227)

* add validation item for instance segmentation

* Add panoptic and stuff COCO format (cvat-ai#210)

* add coco stuff and panoptic formats

* update CHANGELOG

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* update detection splitter algorithm from # of samples to # of instances (cvat-ai#235)

* add documentation for validator (cvat-ai#233)

* add documentation for validator

* add validation item description (cvat-ai#237)

* Fix converter for Pascal VOC format (cvat-ai#239)

* User documentation for Pascal VOC format (cvat-ai#228)

* add user documentation for Pascal VOC format

* add integration tests

* update changelog

* Support for MNIST dataset format (cvat-ai#234)

* add mnist format

* add mnist csv format

* add mnist to documentation

* make formats docs folder, create COCO format documentation (cvat-ai#241)

* Make formats docs folder, move format docs

* Create COCO format documentation

* Fixes in CIFAR dataset format (cvat-ai#243)

* Add folder creation

* Update changelog

* Add user documentation file and integration tests for YOLO format (cvat-ai#246)

* add user documentation file for yolo

* add integration tests

* update user manual

* update changelog

* Add Cityscapes format (cvat-ai#249)

* add cityscapes format

* add format docs

* update changelog

* Fix saving attribute in WiderFace extractor (cvat-ai#251)

* add fixes

* update changelog

* Fix spelling errors (cvat-ai#252)

* Configurable Threshold CLI support (cvat-ai#250)

* add validator cli

* add configurable validator threshold

* update changelog

* CI. Move to GitHub actions. (cvat-ai#263)

* Moving to GitHub Actions

* Sending a coverage report if python3.6 (cvat-ai#264)

* Rename workflows (cvat-ai#265)

* Rename workflows

* Update repo config and badge (cvat-ai#266)

* Update PR template

* Update build status badge

* Fix deprecation warnings (cvat-ai#270)

* Update RISE docs (cvat-ai#255)

* Update rise docs

* Update cli help

* Pytest related changes (cvat-ai#248)

* Tests moved to pytest. Updated CI. Updated requirements.

* Updated contribution guide

* Added annotations for tests

* Updated tests

* Added code style guide

* Fix CI (cvat-ai#272)

* Fix script call

* change script call to binary call

* Fix help program name, add mark_bug (cvat-ai#275)

* Fix prog name

* Add mark_bug test annotation

* Fix labelmap parameter in CamVid (cvat-ai#262)

* Fix labelmap parameter in camvid

* Release 0.1.9 (dev) (cvat-ai#276)

* Update version

* Update changelog

* Fix numpy conflict (cvat-ai#278)

* Add changelog stub (cvat-ai#279)

* tests/requirements.py: remove the test_wrapper functions (cvat-ai#285)

* Subformat importers for VOC and COCO (cvat-ai#281)

* Document find_sources

* Add VOC subformat importers

* Add coco subformat importers

* Fix LFW

* Reduce voc detect dataset cases

* Reorganize coco tests, add subformat tests

* Fix default subset handling in Dataset

* Fix getting subset

* Fix coco tests

* Fix voc tests

* Update changelog

* Add image zip format (cvat-ai#273)

* add tests

* add image_zip format

* update changelog

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* Add KITTI detection and segmentation formats (cvat-ai#282)

* Add KITTI detection and segmentation formats

* Remove unused import

* Add KITTI user manual

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* Fix loading file and image processing in CIFAR (cvat-ai#284)

* Fix image layout and encoding problems

* Update Changelog

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* CLI tests for convert command for VOC dataset (cvat-ai#286)

* Add tests for convert command

* Convert most enum definitions from the functional style to the class style (cvat-ai#290)

* yolo format documentation update (cvat-ai#295)

* add info about coordinates in yolo format doc

* Fix merged dataset item filtering (cvat-ai#258)

* Add tests

* Fix xpathfilter transform

* Update changelog

* Sms/pytest marking cityscapes and zip (cvat-ai#298)

* Updated pytest marking for cityscapes and imagezip.

* Introduce Validator plugin type (cvat-ai#299)

* Introduce Validator plugin type

* Fix validator definitions (cvat-ai#303)

* update changelog

* Fixes in validator definitions

* Update validator cli

* Make TF availability check optional (cvat-ai#305)

* Make tf availability check optional

* update changelog

* Update pylint (cvat-ai#304)

* Add import order check in pylint

* Fix some linter problems

* Remove warning suppression comments

* Add lazy loading for builtin plugins (cvat-ai#306)

* Refactor env code

* Load builtin plugins lazily

* update changelog

* Update transforms handling in Dataset (cvat-ai#297)

* Update builtin transforms

* Optimize dataset length computation when no source

* Add filter test

* Fix transforms affecting categories

* Optimize categories transforms

* Update filters

* fix imports

* Avoid using default docstrings in plugins

* Fix patch saving in VOC, add keep_empty export parameter

* Fix flush_changes

* Fix removed images and subsets in dataset patch

* Update changelog

* Update voc doc

* Skip item transform base class in plugins

* Readable COCO and datumaro format for CJK (cvat-ai#307)

* Do not force ASCII in COCO and Datumaro JSONs for readable CJK

* Add tests

* Use utf-8 encoding for writing

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* Force utf-8 everywhere (cvat-ai#309)

* Fix in ImageNet_txt (cvat-ai#302)

* Add extensions for images to annotation file

* Remove image search in extractor

* Update changelog

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* Reduce duplication of dependency information (cvat-ai#308)

* Move requirements from setup.py to requirements-base.txt

* Add whitespace error checking to GitHub Actions (cvat-ai#311)

* Fix whitespace errors

As detected with `git diff --check`.

* Add a job to check for whitespace errors

I called it "lint" so that other checks could be added to it later.

* Bump copyright years in changed files

* Add initial support for the Open Images dataset (cvat-ai#291)

* Support reading of Labels in Open Images (v4, v5, v6)

* Add tests for the Open Images extractor/importer

* Add Open Images documentation

* Update changelog

* Fix tensorboardX dependency (cvat-ai#318)

* Fixing remark-lint issues. Adding remark-linter check. (cvat-ai#321)

* Fix remark-lint issues.

* Align continuation lines with the first line.

Apply comments

* Added remark check

* Add an upper bound on the Pillow dependency to work around a regression in 8.3 (cvat-ai#323)

* open_images_user_manual.md: fix image description file URLs

I accidentally swapped the URLs for test and validation sets.

* Fix COCO Panoptic (cvat-ai#319)

* add test

* Fix integer overflow in bgr2index

* Fix pylint issues. Added pylint checking. (cvat-ai#322)

* Added pylint job for CI

* Rework pip install

* Fixed remaining pylint warnings

Co-authored-by: Andrey Zhavoronkov <andrey.zhavoronkov@intel.com>

* Open Images: add writing support (cvat-ai#315)

* open_images_user_manual.md: fix image description file URLs

* open_images_format: add conversion support

* open_images_format: add support for images in subdirectories

* open_images_format: add tests for writing support

* open_images_format: add documentation for the writing support

* Update the changelog entry for the Open Images support

* Add python bandit checks. (cvat-ai#316)

* Add bandit dependency

* Add bandit checks on CI

* Disable some warnings

Co-authored-by: Andrey Zhavoronkov <andrey.zhavoronkov@intel.com>
Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* Remove Pylint unused-import warning suppressions (cvat-ai#326)

* Remove Pylint unused-import warning suppressions

* Add a job to check import formatting using isort (cvat-ai#333)

* Reformat all imports using isort

* Implement a workflow for checking import formatting based on isort

* Reformat the enabled checker list in .pylintrc (cvat-ai#335)

Put each code on its own line and add a comment with its symbolic name.
That makes the list more understandable and easier to edit.

* Merge all linting jobs into one workflow file (cvat-ai#331)

Doing it this way means that on GitHub's Checks page, all jobs are displayed
under one "Linter" category, instead of multiple indistinguishable "Linter"
categories with one job each.

Move the whitespace checking job into the Linter workflow as well, since
that's where it logically belongs.

I also took the opportunity to slightly rename the jobs in order to spell
the linter names correctly.

* Fix cuboids / 3d / M6 (cvat-ai#320)

* CVAT-3D Milestone-6: Added Supervisely Point Cloud and KITTI Raw 3D formats

* Added Cuboid3d annotations

* Added docs for new formats

Co-authored-by: cdp <cdp123>
Co-authored-by: Jayraj <jayrajsolanki96@gmail.com>
Co-authored-by: Roman Donchenko <roman.donchenko@intel.com>

* Clean up .pylintrc (cvat-ai#340)

* Clean up the list of messages in .pylintrc

* Remove obsolete Pylint options

* .pylintrc: move the disable setting and its documentation together

* Remove the commented-out setting.

* Revert "Add an upper bound on the Pillow dependency to work around a regression in 8.3 (cvat-ai#323)" (cvat-ai#341)

The regression was fixed in 8.3.1.

This reverts commit 9a85616.

* Enable pylint checkers that find invalid escape sequences (cvat-ai#344)

Fix the issues that they found.

* Factor out the images.meta loading code from YoloExtractor (cvat-ai#343)

* Factor out the images.meta loading code from YoloExtractor

It looks like the same thing will be needed for Open Images, so I'm
moving it to a common module.

* Rework image.meta parsing code to use shell syntax

This allows comments and improves extensibility.

* Support for CIFAR-100 (cvat-ai#301)

* Add support for CIFAR-100

* Update Changelog

* Update user_manual.md

* Add notes about differences in formats

* Fix importing for VGG Face 2 (cvat-ai#345)

* correct asset according the original vgg_face2 dataset

* fix importing of the original dataset

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* Dataset caching fixes (cvat-ai#351)

* Fix importing arbitrary file names in COCO subformats

* Optimize subset iteration in a simple scenario

* Fix subset iteration in dataset with transforms

* Cuboid 3D for Datumaro format (cvat-ai#349)

* Support cuboid_3d and point cloud in datumaro format

* Add cuboid_3d and point cloud tests in datumaro format

* Add image size type conversions

Co-authored-by: Maxim Zhiltsov <maxim.zhiltsov@intel.com>

* Add e2e tests for cuboids (cvat-ai#353)

* Add attr name check in kitti raw

* Add sly pcd e2e test

* Rename "object" attribute to "track_id" in sly point cloud

* Add kitti raw e2e test

* Update kitti raw example

* update changelog

* Release 0.1.10 (dev) (cvat-ai#354)

* Update changelog

* Add cifar security notice

* Update version

Co-authored-by: Emily Chun <emily.chun@intel.com>
Co-authored-by: Jihyeon Yi <jihyeon.yi@intel.com>
Co-authored-by: Kirill Sizov <kirill.sizov@intel.com>
Co-authored-by: Anastasia Yasakova <anastasia.yasakova@intel.com>
Co-authored-by: Harim Kang <harimx.kang@intel.com>
Co-authored-by: Zoya Maslova <zoya.maslova@intel.com>
Co-authored-by: Roman Donchenko <roman.donchenko@intel.com>
Co-authored-by: Seungyoon Woo <seung.woo@intel.com>
Co-authored-by: Dmitry Kruchinin <33020454+dvkruchinin@users.noreply.github.com>
Co-authored-by: Slawomir Strehlke <slawomir.strehlke@intel.com>
Co-authored-by: Jaesun Park <diligensloth@gmail.com>
Co-authored-by: Andrey Zhavoronkov <andrey.zhavoronkov@intel.com>
Co-authored-by: Jayraj <jayrajsolanki96@gmail.com>