diff --git a/.github/ISSUE_TEMPLATE/----.md b/.github/ISSUE_TEMPLATE/----.md index 3a90404e9de..818888751c5 100644 --- a/.github/ISSUE_TEMPLATE/----.md +++ b/.github/ISSUE_TEMPLATE/----.md @@ -15,12 +15,12 @@ assignees: '' ### 描述你遇到的问题 -\[填写这里\] +[填写这里] ### 相关信息 1. `pip list | grep "mmcv\|mmcls\|^torch"` 命令的输出 - \[填写这里\] + [填写这里] 2. 如果你修改了,或者使用了新的配置文件,请在这里写明 ```python @@ -28,6 +28,6 @@ assignees: '' ``` 3. 如果你是在训练过程中遇到的问题,请填写完整的训练日志和报错信息 - \[填写这里\] + [填写这里] 4. 如果你对 `mmcls` 文件夹下的代码做了其他相关的修改,请在这里写明 - \[填写这里\] + [填写这里] diff --git a/.github/ISSUE_TEMPLATE/---.md b/.github/ISSUE_TEMPLATE/---.md index fe91547056b..9cae1c5e550 100644 --- a/.github/ISSUE_TEMPLATE/---.md +++ b/.github/ISSUE_TEMPLATE/---.md @@ -10,7 +10,7 @@ assignees: '' ### 描述这个功能 -\[填写这里\] +[填写这里] ### 动机 @@ -18,17 +18,17 @@ assignees: '' 例 1. 现在进行 xxx 的时候不方便 例 2. 最近的论文中提出了有一个很有帮助的 xx -\[填写这里\] +[填写这里] ### 相关资源 是否有相关的官方实现或者第三方实现?这些会很有参考意义。 -\[填写这里\] +[填写这里] ### 其他相关信息 其他和这个功能相关的信息或者截图,请放在这里。 另外如果你愿意参与实现这个功能并提交 PR,请在这里说明,我们将非常欢迎。 -\[填写这里\] +[填写这里] diff --git a/.github/ISSUE_TEMPLATE/---bug.md b/.github/ISSUE_TEMPLATE/---bug.md index a3ec4988c65..681bd068b80 100644 --- a/.github/ISSUE_TEMPLATE/---bug.md +++ b/.github/ISSUE_TEMPLATE/---bug.md @@ -12,7 +12,7 @@ assignees: '' 简单地描述一下遇到了什么 bug -\[填写这里\] +[填写这里] ### 复现流程 @@ -25,7 +25,7 @@ assignees: '' ### 相关信息 1. `pip list | grep "mmcv\|mmcls\|^torch"` 命令的输出 - \[填写这里\] + [填写这里] 2. 如果你修改了,或者使用了新的配置文件,请在这里写明 ```python @@ -33,12 +33,12 @@ assignees: '' ``` 3. 如果你是在训练过程中遇到的问题,请填写完整的训练日志和报错信息 - \[填写这里\] + [填写这里] 4. 如果你对 `mmcls` 文件夹下的代码做了其他相关的修改,请在这里写明 - \[填写这里\] + [填写这里] ### 附加内容 任何其他有关该 bug 的信息、截图等 -\[填写这里\] +[填写这里] diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index c00c1f59600..8827d5d1a03 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -10,7 +10,7 @@ assignees: '' A clear and concise description of what the bug is. 
-\[here\] +[here] ### To Reproduce @@ -23,7 +23,7 @@ The command you executed. ### Post related information 1. The output of `pip list | grep "mmcv\|mmcls\|^torch"` - \[here\] + [here] 2. Your config file if you modified it or created a new one. ```python @@ -31,12 +31,12 @@ The command you executed. ``` 3. Your train log file if you meet the problem during training. - \[here\] + [here] 4. Other code you modified in the `mmcls` folder. - \[here\] + [here] ### Additional context Add any other context about the problem here. -\[here\] +[here] diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md index 23b7c097b8c..7db63e1b6cd 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -8,25 +8,25 @@ assignees: '' ### Describe the feature -\[here\] +[here] ### Motivation A clear and concise description of the motivation of the feature. -Ex1. It is inconvenient when \[....\]. -Ex2. There is a recent paper \[....\], which is very helpful for \[....\]. +Ex1. It is inconvenient when [....]. +Ex2. There is a recent paper [....], which is very helpful for [....]. -\[here\] +[here] ### Related resources If there is an official code release or third-party implementation, please also provide the information here, which would be very helpful. -\[here\] +[here] ### Additional context Add any other context or screenshots about the feature request here. If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated. -\[here\] +[here] diff --git a/.github/ISSUE_TEMPLATE/general-questions.md b/.github/ISSUE_TEMPLATE/general-questions.md index 42d5fb2e4c2..ddf19df45e9 100644 --- a/.github/ISSUE_TEMPLATE/general-questions.md +++ b/.github/ISSUE_TEMPLATE/general-questions.md @@ -13,12 +13,12 @@ assignees: '' ### Describe the question you meet -\[here\] +[here] ### Post related information 1. 
The output of `pip list | grep "mmcv\|mmcls\|^torch"` - \[here\] + [here] 2. Your config file if you modified it or created a new one. ```python @@ -26,6 +26,6 @@ assignees: '' ``` 3. Your train log file if you meet the problem during training. - \[here\] + [here] 4. Other code you modified in the `mmcls` folder. - \[here\] + [here] diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 1f1e9ad54a0..7c3074f7cfa 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -29,7 +29,7 @@ repos: rev: 0.7.9 hooks: - id: mdformat - args: ["--number", "--table-width", "200", '--disable-escape', 'backslash'] + args: ["--number", "--table-width", "200", '--disable-escape', 'backslash', '--disable-escape', 'link-enclosure'] additional_dependencies: - "mdformat-openmmlab>=0.0.4" - mdformat_frontmatter diff --git a/README.md b/README.md index c8ff976bb33..a8268f39ee2 100644 --- a/README.md +++ b/README.md @@ -58,18 +58,18 @@ The `1.x` branch works with **PyTorch 1.6+**. ## What's new +v1.0.0rc3 was released in 21/11/2022. + +- Add **Switch Recipe** Hook. Now we can modify the training pipeline, mixup and loss settings during training, see [#1101](https://github.com/open-mmlab/mmclassification/pull/1101). +- Add **TIMM and HuggingFace** wrappers. Now you can train/use models in TIMM/HuggingFace directly, see [#1102](https://github.com/open-mmlab/mmclassification/pull/1102). +- Support **retrieval tasks**, see [#1055](https://github.com/open-mmlab/mmclassification/pull/1055). +- Reproduce **MobileOne** training accuracy, see [#1191](https://github.com/open-mmlab/mmclassification/pull/1191). + v1.0.0rc2 was released in 12/10/2022. - Support Deit-3 backbone. - Fix MMEngine version requirements. -v1.0.0rc1 was released in 30/9/2022. - -- Support MViT, EdgeNeXt, Swin-Transformer V2, EfficientFormer and MobileOne. -- Support BEiT type transformer layer. - -v1.0.0rc0 was released in 31/8/2022.
- This release introduced a brand new and flexible training & test engine, but it's still in progress. Welcome to try according to [the documentation](https://mmclassification.readthedocs.io/en/1.x/). diff --git a/README_zh-CN.md b/README_zh-CN.md index 555e3b449ec..e5f1e04e044 100644 --- a/README_zh-CN.md +++ b/README_zh-CN.md @@ -57,6 +57,13 @@ MMClassification 是一款基于 PyTorch 的开源图像分类工具箱,是 [O ## 更新日志 +2022/11/21 发布了 v1.0.0rc3 版本 + +- 添加了 **Switch Recipe Hook**,现在我们可以在训练过程中修改数据增强、Mixup设置、loss设置等 +- 添加了 **TIMM 和 HuggingFace** 包装器,现在我们可以直接训练、使用 TIMM 和 HuggingFace 中的模型 +- 支持了检索任务 +- 复现了 **MobileOne** 训练精度 + 2022/10/12 发布了 v1.0.0rc2 版本 - 支持了 Deit-3 主干网络 diff --git a/configs/_base_/datasets/cifar100_bs16.py b/configs/_base_/datasets/cifar100_bs16.py index 78a74fa69ee..86ac33a4662 100644 --- a/configs/_base_/datasets/cifar100_bs16.py +++ b/configs/_base_/datasets/cifar100_bs16.py @@ -27,7 +27,6 @@ test_mode=False, pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -39,7 +38,6 @@ test_mode=True, pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, )) diff --git a/configs/_base_/datasets/cifar10_bs16.py b/configs/_base_/datasets/cifar10_bs16.py index f29cfcd93c8..cbd191c58c8 100644 --- a/configs/_base_/datasets/cifar10_bs16.py +++ b/configs/_base_/datasets/cifar10_bs16.py @@ -27,7 +27,6 @@ test_mode=False, pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -39,7 +38,6 @@ test_mode=True, pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, )) diff --git a/configs/_base_/datasets/cub_bs8_384.py b/configs/_base_/datasets/cub_bs8_384.py index 17139dcb0f1..d896d96121e 100644 --- a/configs/_base_/datasets/cub_bs8_384.py +++ 
b/configs/_base_/datasets/cub_bs8_384.py @@ -33,7 +33,6 @@ test_mode=False, pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -45,7 +44,6 @@ test_mode=True, pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, )) diff --git a/configs/_base_/datasets/cub_bs8_448.py b/configs/_base_/datasets/cub_bs8_448.py index 0b07a1a0890..b990b6290aa 100644 --- a/configs/_base_/datasets/cub_bs8_448.py +++ b/configs/_base_/datasets/cub_bs8_448.py @@ -32,7 +32,6 @@ test_mode=False, pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -44,7 +43,6 @@ test_mode=True, pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, )) diff --git a/configs/_base_/datasets/imagenet21k_bs128.py b/configs/_base_/datasets/imagenet21k_bs128.py index 0f24b8a0513..84716257de0 100644 --- a/configs/_base_/datasets/imagenet21k_bs128.py +++ b/configs/_base_/datasets/imagenet21k_bs128.py @@ -33,7 +33,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -46,7 +45,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs128_mbv3.py b/configs/_base_/datasets/imagenet_bs128_mbv3.py index d64f258b971..ae90fa037dd 100644 --- a/configs/_base_/datasets/imagenet_bs128_mbv3.py +++ b/configs/_base_/datasets/imagenet_bs128_mbv3.py @@ -48,7 +48,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( 
@@ -61,7 +60,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs128_poolformer_medium_224.py b/configs/_base_/datasets/imagenet_bs128_poolformer_medium_224.py index 1f03d96dac7..3e33d303692 100644 --- a/configs/_base_/datasets/imagenet_bs128_poolformer_medium_224.py +++ b/configs/_base_/datasets/imagenet_bs128_poolformer_medium_224.py @@ -62,7 +62,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs128_poolformer_small_224.py b/configs/_base_/datasets/imagenet_bs128_poolformer_small_224.py index d8785707adb..b61de03b873 100644 --- a/configs/_base_/datasets/imagenet_bs128_poolformer_small_224.py +++ b/configs/_base_/datasets/imagenet_bs128_poolformer_small_224.py @@ -62,7 +62,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs16_pil_bicubic_384.py b/configs/_base_/datasets/imagenet_bs16_pil_bicubic_384.py index 4ca5c828d16..9bb3f83da5d 100644 --- a/configs/_base_/datasets/imagenet_bs16_pil_bicubic_384.py +++ b/configs/_base_/datasets/imagenet_bs16_pil_bicubic_384.py @@ -35,7 +35,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) 
val_dataloader = dict( @@ -48,7 +47,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs256_davit_224.py b/configs/_base_/datasets/imagenet_bs256_davit_224.py index faf46523a84..7dbb6c3c41a 100644 --- a/configs/_base_/datasets/imagenet_bs256_davit_224.py +++ b/configs/_base_/datasets/imagenet_bs256_davit_224.py @@ -62,7 +62,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs256_rsb_a12.py b/configs/_base_/datasets/imagenet_bs256_rsb_a12.py index 3038d46ad6c..77b179f95ce 100644 --- a/configs/_base_/datasets/imagenet_bs256_rsb_a12.py +++ b/configs/_base_/datasets/imagenet_bs256_rsb_a12.py @@ -54,7 +54,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -67,7 +66,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs256_rsb_a3.py b/configs/_base_/datasets/imagenet_bs256_rsb_a3.py index 53a17c20270..8f3d1a48588 100644 --- a/configs/_base_/datasets/imagenet_bs256_rsb_a3.py +++ b/configs/_base_/datasets/imagenet_bs256_rsb_a3.py @@ -54,7 +54,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -67,7 +66,6 @@ data_prefix='val', pipeline=test_pipeline), 
sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs32.py b/configs/_base_/datasets/imagenet_bs32.py index 5bfa94aafa5..4b3b4ba2178 100644 --- a/configs/_base_/datasets/imagenet_bs32.py +++ b/configs/_base_/datasets/imagenet_bs32.py @@ -33,7 +33,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -46,7 +45,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs32_pil_bicubic.py b/configs/_base_/datasets/imagenet_bs32_pil_bicubic.py index aa34c574d83..d54838763a0 100644 --- a/configs/_base_/datasets/imagenet_bs32_pil_bicubic.py +++ b/configs/_base_/datasets/imagenet_bs32_pil_bicubic.py @@ -42,7 +42,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -55,7 +54,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs32_pil_resize.py b/configs/_base_/datasets/imagenet_bs32_pil_resize.py index 48234eb166d..2db8f89b2e8 100644 --- a/configs/_base_/datasets/imagenet_bs32_pil_resize.py +++ b/configs/_base_/datasets/imagenet_bs32_pil_resize.py @@ -33,7 +33,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -46,7 +45,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) 
diff --git a/configs/_base_/datasets/imagenet_bs64.py b/configs/_base_/datasets/imagenet_bs64.py index ea2db282adc..bb80a1f532f 100644 --- a/configs/_base_/datasets/imagenet_bs64.py +++ b/configs/_base_/datasets/imagenet_bs64.py @@ -33,7 +33,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -46,7 +45,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_autoaug.py b/configs/_base_/datasets/imagenet_bs64_autoaug.py index 2d4c4469691..196dec820b2 100644 --- a/configs/_base_/datasets/imagenet_bs64_autoaug.py +++ b/configs/_base_/datasets/imagenet_bs64_autoaug.py @@ -41,7 +41,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -54,7 +53,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_convmixer_224.py b/configs/_base_/datasets/imagenet_bs64_convmixer_224.py index 14932cfb321..0a30815db80 100644 --- a/configs/_base_/datasets/imagenet_bs64_convmixer_224.py +++ b/configs/_base_/datasets/imagenet_bs64_convmixer_224.py @@ -62,7 +62,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_deit3_224.py b/configs/_base_/datasets/imagenet_bs64_deit3_224.py index 
430e1fcf683..60a882d23f3 100644 --- a/configs/_base_/datasets/imagenet_bs64_deit3_224.py +++ b/configs/_base_/datasets/imagenet_bs64_deit3_224.py @@ -62,7 +62,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_deit3_384.py b/configs/_base_/datasets/imagenet_bs64_deit3_384.py index b9cd29270a9..9b8c73ad25c 100644 --- a/configs/_base_/datasets/imagenet_bs64_deit3_384.py +++ b/configs/_base_/datasets/imagenet_bs64_deit3_384.py @@ -42,7 +42,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -55,7 +54,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_edgenext_256.py b/configs/_base_/datasets/imagenet_bs64_edgenext_256.py index 0c9dd98e06a..df095b6bc5a 100644 --- a/configs/_base_/datasets/imagenet_bs64_edgenext_256.py +++ b/configs/_base_/datasets/imagenet_bs64_edgenext_256.py @@ -62,7 +62,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_mixer_224.py b/configs/_base_/datasets/imagenet_bs64_mixer_224.py index 9a4a6d44bd0..ddf07dc8c73 100644 --- a/configs/_base_/datasets/imagenet_bs64_mixer_224.py +++ 
b/configs/_base_/datasets/imagenet_bs64_mixer_224.py @@ -34,7 +34,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -47,7 +46,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_pil_resize.py b/configs/_base_/datasets/imagenet_bs64_pil_resize.py index 022dda52840..c97be68e803 100644 --- a/configs/_base_/datasets/imagenet_bs64_pil_resize.py +++ b/configs/_base_/datasets/imagenet_bs64_pil_resize.py @@ -33,7 +33,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -46,7 +45,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_pil_resize_autoaug.py b/configs/_base_/datasets/imagenet_bs64_pil_resize_autoaug.py index fd2709f4274..6244fbaa818 100644 --- a/configs/_base_/datasets/imagenet_bs64_pil_resize_autoaug.py +++ b/configs/_base_/datasets/imagenet_bs64_pil_resize_autoaug.py @@ -50,7 +50,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -63,7 +62,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_swin_224.py b/configs/_base_/datasets/imagenet_bs64_swin_224.py index 1a54932a473..39d716795e8 100644 --- a/configs/_base_/datasets/imagenet_bs64_swin_224.py +++ b/configs/_base_/datasets/imagenet_bs64_swin_224.py @@ -62,7 +62,6 @@ 
data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_swin_256.py b/configs/_base_/datasets/imagenet_bs64_swin_256.py index d3b15833fb5..79e2a1ca35d 100644 --- a/configs/_base_/datasets/imagenet_bs64_swin_256.py +++ b/configs/_base_/datasets/imagenet_bs64_swin_256.py @@ -62,7 +62,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_swin_384.py b/configs/_base_/datasets/imagenet_bs64_swin_384.py index 1e64f6aa86b..d4e9d3ff379 100644 --- a/configs/_base_/datasets/imagenet_bs64_swin_384.py +++ b/configs/_base_/datasets/imagenet_bs64_swin_384.py @@ -36,7 +36,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -49,7 +48,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs64_t2t_224.py b/configs/_base_/datasets/imagenet_bs64_t2t_224.py index 249806abe35..f3dc75abd29 100644 --- a/configs/_base_/datasets/imagenet_bs64_t2t_224.py +++ b/configs/_base_/datasets/imagenet_bs64_t2t_224.py @@ -62,7 +62,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - 
persistent_workers=True, ) val_dataloader = dict( @@ -75,7 +74,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/imagenet_bs8_pil_bicubic_320.py b/configs/_base_/datasets/imagenet_bs8_pil_bicubic_320.py index f65e70d9d95..e776907d1ac 100644 --- a/configs/_base_/datasets/imagenet_bs8_pil_bicubic_320.py +++ b/configs/_base_/datasets/imagenet_bs8_pil_bicubic_320.py @@ -41,7 +41,6 @@ data_prefix='train', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -54,7 +53,6 @@ data_prefix='val', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, 5)) diff --git a/configs/_base_/datasets/voc_bs16.py b/configs/_base_/datasets/voc_bs16.py index 8a8b6d69056..dce46edb624 100644 --- a/configs/_base_/datasets/voc_bs16.py +++ b/configs/_base_/datasets/voc_bs16.py @@ -34,7 +34,6 @@ image_set_path='ImageSets/Layout/val.txt', pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -46,7 +45,6 @@ image_set_path='ImageSets/Layout/val.txt', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) test_dataloader = dict( @@ -58,7 +56,6 @@ image_set_path='ImageSets/Layout/val.txt', pipeline=test_pipeline), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) # calculate precision_recall_f1 and mAP diff --git a/configs/efficientnet/README.md b/configs/efficientnet/README.md index 625b57c5a54..c53ab5b68fc 100644 --- a/configs/efficientnet/README.md +++ b/configs/efficientnet/README.md @@ -66,8 +66,8 @@ Note: In MMClassification, we support training with AutoAugment, don't support A | EfficientNet-B7 
(AA + AdvProp)\* | 66.35 | 39.3 | 85.14 | 97.23 | [config](./efficientnet-b7_8xb32-01norm_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b7_3rdparty_8xb32-aa-advprop_in1k_20220119-c6dbff10.pth) | | EfficientNet-B7 (RA + NoisyStudent)\* | 66.35 | 65.0 | 86.83 | 98.08 | [config](./efficientnet-b7_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b7_3rdparty-ra-noisystudent_in1k_20221103-a82894bc.pth) | | EfficientNet-B8 (AA + AdvProp)\* | 87.41 | 65.0 | 85.38 | 97.28 | [config](./efficientnet-b8_8xb32-01norm_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b8_3rdparty_8xb32-aa-advprop_in1k_20220119-297ce1b7.pth) | -| EfficientNet-L2-475 (RA + NoisyStudent)\* | 480.30 | 174.20 | 88.18 | 98.55 | [config](./efficientnet-l2-475_8xb8_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k-475px_20221103-5a0d8058.pth) | -| EfficientNet-L2 (RA + NoisyStudent)\* | 480.30 | 484.98 | 88.33 | 98.65 | [config](./efficientnet-l2_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k_20221103-be73be13.pth) | +| EfficientNet-L2-475 (RA + NoisyStudent)\* | 480.30 | 174.20 | 88.18 | 98.55 | [config](./efficientnet-l2_8xb32_in1k-475px.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k-475px_20221103-5a0d8058.pth) | +| EfficientNet-L2 (RA + NoisyStudent)\* | 480.30 | 484.98 | 88.33 | 98.65 | [config](./efficientnet-l2_8xb8_in1k-800px.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k_20221103-be73be13.pth) | *Models with * are converted from the [official repo](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). 
The config files of these models are only for inference. We don't ensure these config files' training accuracy and welcome you to contribute your reproduction results.* diff --git a/configs/efficientnet/efficientnet-l2_8xb8_in1k.py b/configs/efficientnet/efficientnet-l2_8xb8_in1k-800px.py similarity index 100% rename from configs/efficientnet/efficientnet-l2_8xb8_in1k.py rename to configs/efficientnet/efficientnet-l2_8xb8_in1k-800px.py diff --git a/configs/efficientnet/metafile.yml b/configs/efficientnet/metafile.yml index aeb5488e0d6..ddfa71db9ec 100644 --- a/configs/efficientnet/metafile.yml +++ b/configs/efficientnet/metafile.yml @@ -517,7 +517,7 @@ Models: Converted From: Weights: https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/advprop/efficientnet-b8.tar.gz Code: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet - - Name: efficientnet-l2_3rdparty-ra-noisystudent_in1k + - Name: efficientnet-l2_3rdparty-ra-noisystudent_in1k-800px Metadata: FLOPs: 174203533416 Parameters: 480309308 @@ -529,7 +529,7 @@ Models: Top 5 Accuracy: 98.65 Task: Image Classification Weights: https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k_20221103-be73be13.pth - Config: configs/efficientnet/efficientnet-l2_8xb8_in1k.py + Config: configs/efficientnet/efficientnet-l2_8xb8_in1k-800px.py Converted From: Weights: https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/noisystudent/noisy_student_efficientnet-l2.tar.gz Code: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet diff --git a/configs/lenet/lenet5_mnist.py b/configs/lenet/lenet5_mnist.py index b74fc0ec71c..feef609c898 100644 --- a/configs/lenet/lenet5_mnist.py +++ b/configs/lenet/lenet5_mnist.py @@ -22,7 +22,6 @@ num_workers=2, dataset=dict(**common_data_cfg, test_mode=False), sampler=dict(type='DefaultSampler', shuffle=True), - persistent_workers=True, ) val_dataloader = dict( @@ -30,7 +29,6 @@ 
num_workers=2, dataset=dict(**common_data_cfg, test_mode=True), sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, ) val_evaluator = dict(type='Accuracy', topk=(1, )) diff --git a/docker/serve/Dockerfile b/docker/serve/Dockerfile index db0bf7081e7..fa0766e6412 100644 --- a/docker/serve/Dockerfile +++ b/docker/serve/Dockerfile @@ -7,9 +7,9 @@ FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub 32 RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub -ARG MMENGINE="0.2.0" +ARG MMENGINE="0.3.1" ARG MMCV="2.0.0rc1" -ARG MMCLS="1.0.0rc2" +ARG MMCLS="1.0.0rc3" ENV PYTHONUNBUFFERED TRUE diff --git a/docs/en/advanced_guides/runtime.md b/docs/en/advanced_guides/runtime.md index 4e75028c63a..8523a7f6d86 100644 --- a/docs/en/advanced_guides/runtime.md +++ b/docs/en/advanced_guides/runtime.md @@ -33,7 +33,7 @@ Here are some usual arguments, and all available arguments can be found in the [ - **`by_epoch`** (bool): Whether the **`interval`** is by epoch or by iteration. Defaults to `True`. - **`out_dir`** (str): The root directory to save checkpoints. If not specified, the checkpoints will be saved in the work directory. If specified, the checkpoints will be saved in the sub-folder of the **`out_dir`**. - **`max_keep_ckpts`** (int): The maximum checkpoints to keep. In some cases, we want only the latest few checkpoints and would like to delete old ones to save disk space. Defaults to -1, which means unlimited. -- **`save_best`** (str, List\[str\]): If specified, it will save the checkpoint with the best evaluation result. +- **`save_best`** (str, List[str]): If specified, it will save the checkpoint with the best evaluation result. Usually, you can simply use `save_best="auto"` to automatically select the evaluation metric. 
And if you want more advanced configuration, please refer to the [CheckpointHook docs](mmengine.hooks.CheckpointHook). diff --git a/docs/en/advanced_guides/schedule.md b/docs/en/advanced_guides/schedule.md index a9102fca00a..0b9a6a2253d 100644 --- a/docs/en/advanced_guides/schedule.md +++ b/docs/en/advanced_guides/schedule.md @@ -223,7 +223,7 @@ names of learning rate schedulers end with `LR`. ] ``` - Notice that, we use `begin` and `end` arguments here to assign the valid range, which is \[`begin`, `end`) for this schedule. And the range unit is defined by `by_epoch` argument. If not specified, the `begin` is 0 and the `end` is the max epochs or iterations. + Notice that we use the `begin` and `end` arguments here to assign the valid range, which is [`begin`, `end`) for this schedule. The range unit is defined by the `by_epoch` argument. If not specified, `begin` is 0 and `end` is the max epochs or iterations. If the ranges for all schedules are not continuous, the learning rate will stay constant in ignored range, otherwise all valid schedulers will be executed in order in a specific stage, which behaves the same as PyTorch [`ChainedScheduler`](torch.optim.lr_scheduler.ChainedScheduler). diff --git a/docs/en/api/data_process.rst b/docs/en/api/data_process.rst index 01ef3141da4..c81019f1c79 100644 --- a/docs/en/api/data_process.rst +++ b/docs/en/api/data_process.rst @@ -142,7 +142,7 @@ Formatting MMCV transforms ^^^^^^^^^^^^^^^ -We also provides many transforms in MMCV. You can use them directly in the config files. Here are some frequently used transforms, and the whole transforms list can be found in :external:mod:`mmcv.transforms`. +We also provide many transforms in MMCV. You can use them directly in the config files. Here are some frequently used transforms, and the whole transforms list can be found in :external+mmcv:doc:`api/transforms`. ..
list-table:: :widths: 50 50 diff --git a/docs/en/notes/changelog.md b/docs/en/notes/changelog.md index 5d9bcb9e69f..3b07c892a5a 100644 --- a/docs/en/notes/changelog.md +++ b/docs/en/notes/changelog.md @@ -1,23 +1,81 @@ # Changelog +## v1.0.0rc3(21/11/2022) + +### Highlights + +- Add **Switch Recipe** Hook, Now we can modify training pipeline, mixup and loss settings during training, see [#1101](https://github.com/open-mmlab/mmclassification/pull/1101). +- Add **TIMM and HuggingFace** wrappers. Now you can train/use models in TIMM/HuggingFace directly, see [#1102](https://github.com/open-mmlab/mmclassification/pull/1102). +- Support **retrieval tasks**, see [#1055](https://github.com/open-mmlab/mmclassification/pull/1055). +- Reproduce **mobileone** training accuracy. See [#1191](https://github.com/open-mmlab/mmclassification/pull/1191) + +### New Features + +- Add checkpoints from EfficientNets NoisyStudent & L2. ([#1122](https://github.com/open-mmlab/mmclassification/pull/1122)) +- Migrate CSRA head to 1.x. ([#1177](https://github.com/open-mmlab/mmclassification/pull/1177)) +- Support RepLKnet backbone. ([#1129](https://github.com/open-mmlab/mmclassification/pull/1129)) +- Add Switch Recipe Hook. ([#1101](https://github.com/open-mmlab/mmclassification/pull/1101)) +- Add adan optimizer. ([#1180](https://github.com/open-mmlab/mmclassification/pull/1180)) +- Support DaViT. ([#1105](https://github.com/open-mmlab/mmclassification/pull/1105)) +- Support Activation Checkpointing for ConvNeXt. ([#1153](https://github.com/open-mmlab/mmclassification/pull/1153)) +- Add TIMM and HuggingFace wrappers to build classifiers from them directly. ([#1102](https://github.com/open-mmlab/mmclassification/pull/1102)) +- Add reduction for neck ([#978](https://github.com/open-mmlab/mmclassification/pull/978)) +- Support HorNet Backbone for dev1.x. ([#1094](https://github.com/open-mmlab/mmclassification/pull/1094)) +- Add arcface head. 
([#926](https://github.com/open-mmlab/mmclassification/pull/926)) +- Add Base Retriever and Image2Image Retriever for retrieval tasks. ([#1055](https://github.com/open-mmlab/mmclassification/pull/1055)) +- Support MobileViT backbone. ([#1068](https://github.com/open-mmlab/mmclassification/pull/1068)) + +### Improvements + +- [Enhance] Enhance ArcFaceClsHead. ([#1181](https://github.com/open-mmlab/mmclassification/pull/1181)) +- [Refactor] Refactor to use new fileio API in MMEngine. ([#1176](https://github.com/open-mmlab/mmclassification/pull/1176)) +- [Enhance] Reproduce mobileone training accuracy. ([#1191](https://github.com/open-mmlab/mmclassification/pull/1191)) +- [Enhance] add deleting params info in swinv2. ([#1142](https://github.com/open-mmlab/mmclassification/pull/1142)) +- [Enhance] Add more mobilenetv3 pretrains. ([#1154](https://github.com/open-mmlab/mmclassification/pull/1154)) +- [Enhancement] RepVGG for YOLOX-PAI for dev-1.x. ([#1126](https://github.com/open-mmlab/mmclassification/pull/1126)) +- [Improve] Speed up data preprocessor. ([#1064](https://github.com/open-mmlab/mmclassification/pull/1064)) + +### Bug Fixes + +- Fix the torchserve. ([#1143](https://github.com/open-mmlab/mmclassification/pull/1143)) +- Fix configs due to api refactor of `num_classes`. ([#1184](https://github.com/open-mmlab/mmclassification/pull/1184)) +- Update mmcls2torchserve. ([#1189](https://github.com/open-mmlab/mmclassification/pull/1189)) +- Fix for `inference_model` cannot get classes information in checkpoint. ([#1093](https://github.com/open-mmlab/mmclassification/pull/1093)) + +### Docs Update + +- Add not-found page extension. ([#1207](https://github.com/open-mmlab/mmclassification/pull/1207)) +- update visualization doc. ([#1160](https://github.com/open-mmlab/mmclassification/pull/1160)) +- Support sort and search the Model Summary table. ([#1100](https://github.com/open-mmlab/mmclassification/pull/1100)) +- Improve the ResNet model page. 
([#1118](https://github.com/open-mmlab/mmclassification/pull/1118)) +- update the readme of convnext. ([#1156](https://github.com/open-mmlab/mmclassification/pull/1156)) +- Fix the installation docs link in README. ([#1164](https://github.com/open-mmlab/mmclassification/pull/1164)) +- Improve ViT and MobileViT model pages. ([#1155](https://github.com/open-mmlab/mmclassification/pull/1155)) +- Improve Swin Doc and Add Tabs extension. ([#1145](https://github.com/open-mmlab/mmclassification/pull/1145)) +- Add MMEval projects link in README. ([#1162](https://github.com/open-mmlab/mmclassification/pull/1162)) +- Add runtime configuration docs. ([#1128](https://github.com/open-mmlab/mmclassification/pull/1128)) +- Add custom evaluation docs ([#1130](https://github.com/open-mmlab/mmclassification/pull/1130)) +- Add custom pipeline docs. ([#1124](https://github.com/open-mmlab/mmclassification/pull/1124)) +- Add MMYOLO projects link in MMCLS1.x. ([#1117](https://github.com/open-mmlab/mmclassification/pull/1117)) + ## v1.0.0rc2(12/10/2022) ### New Features -- \[Feature\] Support DeiT3. ([#1065](https://github.com/open-mmlab/mmclassification/pull/1065)) +- [Feature] Support DeiT3. ([#1065](https://github.com/open-mmlab/mmclassification/pull/1065)) ### Improvements -- \[Enhance\] Update `analyze_results.py` for dev-1.x. ([#1071](https://github.com/open-mmlab/mmclassification/pull/1071)) -- \[Enhance\] Get scores from inference api. ([#1070](https://github.com/open-mmlab/mmclassification/pull/1070)) +- [Enhance] Update `analyze_results.py` for dev-1.x. ([#1071](https://github.com/open-mmlab/mmclassification/pull/1071)) +- [Enhance] Get scores from inference api. ([#1070](https://github.com/open-mmlab/mmclassification/pull/1070)) ### Bug Fixes -- \[Fix\] Update requirements.
([#1083](https://github.com/open-mmlab/mmclassification/pull/1083)) ### Docs Update -- \[Docs\] Add 1x docs schedule. ([#1015](https://github.com/open-mmlab/mmclassification/pull/1015)) +- [Docs] Add 1x docs schedule. ([#1015](https://github.com/open-mmlab/mmclassification/pull/1015)) ## v1.0.0rc1(30/9/2022) @@ -33,10 +91,10 @@ ### Improvements -- \[Refactor\] Fix visualization tools. ([#1045](https://github.com/open-mmlab/mmclassification/pull/1045)) -- \[Improve\] Update benchmark scripts ([#1028](https://github.com/open-mmlab/mmclassification/pull/1028)) -- \[Improve\] Update tools to enable `pin_memory` and `persistent_workers` by default. ([#1024](https://github.com/open-mmlab/mmclassification/pull/1024)) -- \[CI\] Update circle-ci and github workflow. ([#1018](https://github.com/open-mmlab/mmclassification/pull/1018)) +- [Refactor] Fix visualization tools. ([#1045](https://github.com/open-mmlab/mmclassification/pull/1045)) +- [Improve] Update benchmark scripts ([#1028](https://github.com/open-mmlab/mmclassification/pull/1028)) +- [Improve] Update tools to enable `pin_memory` and `persistent_workers` by default. ([#1024](https://github.com/open-mmlab/mmclassification/pull/1024)) +- [CI] Update circle-ci and github workflow. ([#1018](https://github.com/open-mmlab/mmclassification/pull/1018)) ### Bug Fixes @@ -95,13 +153,13 @@ And there are some BC-breaking changes. Please check [the migration tutorial](ht ### New Features -- \[Feature\] Support resize relative position embedding in `SwinTransformer`. ([#749](https://github.com/open-mmlab/mmclassification/pull/749)) -- \[Feature\] Add PoolFormer backbone and checkpoints. ([#746](https://github.com/open-mmlab/mmclassification/pull/746)) +- [Feature] Support resize relative position embedding in `SwinTransformer`. ([#749](https://github.com/open-mmlab/mmclassification/pull/749)) +- [Feature] Add PoolFormer backbone and checkpoints. 
([#746](https://github.com/open-mmlab/mmclassification/pull/746)) ### Improvements -- \[Enhance\] Improve CPE performance by reduce memory copy. ([#762](https://github.com/open-mmlab/mmclassification/pull/762)) -- \[Enhance\] Add extra dataloader settings in configs. ([#752](https://github.com/open-mmlab/mmclassification/pull/752)) +- [Enhance] Improve CPE performance by reducing memory copy. ([#762](https://github.com/open-mmlab/mmclassification/pull/762)) +- [Enhance] Add extra dataloader settings in configs. ([#752](https://github.com/open-mmlab/mmclassification/pull/752)) ## v0.22.0(30/3/2022) @@ -113,29 +171,29 @@ And there are some BC-breaking changes. Please check [the migration tutorial](ht ### New Features -- \[Feature\] Add CSPNet and backbone and checkpoints ([#735](https://github.com/open-mmlab/mmclassification/pull/735)) -- \[Feature\] Add `CustomDataset`. ([#738](https://github.com/open-mmlab/mmclassification/pull/738)) -- \[Feature\] Add diff seeds to diff ranks. ([#744](https://github.com/open-mmlab/mmclassification/pull/744)) -- \[Feature\] Support ConvMixer. ([#716](https://github.com/open-mmlab/mmclassification/pull/716)) -- \[Feature\] Our `dist_train` & `dist_test` tools support distributed training on multiple machines. ([#734](https://github.com/open-mmlab/mmclassification/pull/734)) -- \[Feature\] Add RepMLP backbone and checkpoints. ([#709](https://github.com/open-mmlab/mmclassification/pull/709)) -- \[Feature\] Support CUB dataset. ([#703](https://github.com/open-mmlab/mmclassification/pull/703)) -- \[Feature\] Support ResizeMix. ([#676](https://github.com/open-mmlab/mmclassification/pull/676)) +- [Feature] Add CSPNet backbone and checkpoints ([#735](https://github.com/open-mmlab/mmclassification/pull/735)) +- [Feature] Add `CustomDataset`. ([#738](https://github.com/open-mmlab/mmclassification/pull/738)) +- [Feature] Add diff seeds to diff ranks. 
([#744](https://github.com/open-mmlab/mmclassification/pull/744)) +- [Feature] Support ConvMixer. ([#716](https://github.com/open-mmlab/mmclassification/pull/716)) +- [Feature] Our `dist_train` & `dist_test` tools support distributed training on multiple machines. ([#734](https://github.com/open-mmlab/mmclassification/pull/734)) +- [Feature] Add RepMLP backbone and checkpoints. ([#709](https://github.com/open-mmlab/mmclassification/pull/709)) +- [Feature] Support CUB dataset. ([#703](https://github.com/open-mmlab/mmclassification/pull/703)) +- [Feature] Support ResizeMix. ([#676](https://github.com/open-mmlab/mmclassification/pull/676)) ### Improvements -- \[Enhance\] Use `--a-b` instead of `--a_b` in arguments. ([#754](https://github.com/open-mmlab/mmclassification/pull/754)) -- \[Enhance\] Add `get_cat_ids` and `get_gt_labels` to KFoldDataset. ([#721](https://github.com/open-mmlab/mmclassification/pull/721)) -- \[Enhance\] Set torch seed in `worker_init_fn`. ([#733](https://github.com/open-mmlab/mmclassification/pull/733)) +- [Enhance] Use `--a-b` instead of `--a_b` in arguments. ([#754](https://github.com/open-mmlab/mmclassification/pull/754)) +- [Enhance] Add `get_cat_ids` and `get_gt_labels` to KFoldDataset. ([#721](https://github.com/open-mmlab/mmclassification/pull/721)) +- [Enhance] Set torch seed in `worker_init_fn`. ([#733](https://github.com/open-mmlab/mmclassification/pull/733)) ### Bug Fixes -- \[Fix\] Fix the discontiguous output feature map of ConvNeXt. ([#743](https://github.com/open-mmlab/mmclassification/pull/743)) +- [Fix] Fix the discontiguous output feature map of ConvNeXt. ([#743](https://github.com/open-mmlab/mmclassification/pull/743)) ### Docs Update -- \[Docs\] Add brief installation steps in README for copy&paste. ([#755](https://github.com/open-mmlab/mmclassification/pull/755)) -- \[Docs\] fix logo url link from mmocr to mmcls. 
([#732](https://github.com/open-mmlab/mmclassification/pull/732)) +- [Docs] Add brief installation steps in README for copy&paste. ([#755](https://github.com/open-mmlab/mmclassification/pull/755)) +- [Docs] fix logo url link from mmocr to mmcls. ([#732](https://github.com/open-mmlab/mmclassification/pull/732)) ## v0.21.0(04/03/2022) @@ -238,18 +296,18 @@ And there are some BC-breaking changes. Please check [the migration tutorial](ht ### Improvements -- \[Reproduction\] Reproduce RegNetX training accuracy. ([#587](https://github.com/open-mmlab/mmclassification/pull/587)) -- \[Reproduction\] Reproduce training results of T2T-ViT. ([#610](https://github.com/open-mmlab/mmclassification/pull/610)) -- \[Enhance\] Provide high-acc training settings of ResNet. ([#572](https://github.com/open-mmlab/mmclassification/pull/572)) -- \[Enhance\] Set a random seed when the user does not set a seed. ([#554](https://github.com/open-mmlab/mmclassification/pull/554)) -- \[Enhance\] Added `NumClassCheckHook` and unit tests. ([#559](https://github.com/open-mmlab/mmclassification/pull/559)) -- \[Enhance\] Enhance feature extraction function. ([#593](https://github.com/open-mmlab/mmclassification/pull/593)) -- \[Enhance\] Improve efficiency of precision, recall, f1_score and support. ([#595](https://github.com/open-mmlab/mmclassification/pull/595)) -- \[Enhance\] Improve accuracy calculation performance. ([#592](https://github.com/open-mmlab/mmclassification/pull/592)) -- \[Refactor\] Refactor `analysis_log.py`. ([#529](https://github.com/open-mmlab/mmclassification/pull/529)) -- \[Refactor\] Use new API of matplotlib to handle blocking input in visualization. ([#568](https://github.com/open-mmlab/mmclassification/pull/568)) -- \[CI\] Cancel previous runs that are not completed. ([#583](https://github.com/open-mmlab/mmclassification/pull/583)) -- \[CI\] Skip build CI if only configs or docs modification. 
([#575](https://github.com/open-mmlab/mmclassification/pull/575)) +- [Reproduction] Reproduce RegNetX training accuracy. ([#587](https://github.com/open-mmlab/mmclassification/pull/587)) +- [Reproduction] Reproduce training results of T2T-ViT. ([#610](https://github.com/open-mmlab/mmclassification/pull/610)) +- [Enhance] Provide high-acc training settings of ResNet. ([#572](https://github.com/open-mmlab/mmclassification/pull/572)) +- [Enhance] Set a random seed when the user does not set a seed. ([#554](https://github.com/open-mmlab/mmclassification/pull/554)) +- [Enhance] Added `NumClassCheckHook` and unit tests. ([#559](https://github.com/open-mmlab/mmclassification/pull/559)) +- [Enhance] Enhance feature extraction function. ([#593](https://github.com/open-mmlab/mmclassification/pull/593)) +- [Enhance] Improve efficiency of precision, recall, f1_score and support. ([#595](https://github.com/open-mmlab/mmclassification/pull/595)) +- [Enhance] Improve accuracy calculation performance. ([#592](https://github.com/open-mmlab/mmclassification/pull/592)) +- [Refactor] Refactor `analysis_log.py`. ([#529](https://github.com/open-mmlab/mmclassification/pull/529)) +- [Refactor] Use new API of matplotlib to handle blocking input in visualization. ([#568](https://github.com/open-mmlab/mmclassification/pull/568)) +- [CI] Cancel previous runs that are not completed. ([#583](https://github.com/open-mmlab/mmclassification/pull/583)) +- [CI] Skip build CI if only configs or docs modification. ([#575](https://github.com/open-mmlab/mmclassification/pull/575)) ### Bug Fixes diff --git a/docs/en/notes/faq.md b/docs/en/notes/faq.md index 285c778395a..b390b10e6ec 100644 --- a/docs/en/notes/faq.md +++ b/docs/en/notes/faq.md @@ -17,7 +17,7 @@ and make sure you fill in all required information in the template. 
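As context for the `--cfg-options` entries in the train/test tables touched by this diff: list values such as `key="[(a,b),(c,d)]"` must be quoted so the shell passes them through intact. A rough sketch of how such an override *could* be split and parsed (an illustration only, not MMEngine's actual `DictAction` implementation):

```python
import ast

def parse_override(pair):
    """Split a hypothetical ``key=value`` override and parse list/tuple
    values, mimicking (not reproducing) how config overrides behave."""
    key, value = pair.split('=', maxsplit=1)
    try:
        # Quoted values like "[(1,2),(3,4)]" parse as Python literals;
        # bare identifiers fall back to plain strings.
        value = ast.literal_eval(value)
    except (SyntaxError, ValueError):
        pass
    return key, value

print(parse_override('max_epochs=300'))     # ('max_epochs', 300)
print(parse_override('key=[(1,2),(3,4)]'))  # ('key', [(1, 2), (3, 4)])
```

Without the surrounding quotes, the shell would interpret the brackets and parentheses itself, which is why the docs call the quotes mandatory.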
| MMClassification version | MMCV version | | :----------------------: | :--------------------: | - | 1.0.0rc2 (1.x) | mmcv>=2.0.0rc1 | + | 1.0.0rc3 (1.x) | mmcv>=2.0.0rc1 | | 0.24.0 (master) | mmcv>=1.4.2, \<1.7.0 | | 0.23.1 | mmcv>=1.4.2, \<1.6.0 | | 0.22.1 | mmcv>=1.4.2, \<1.6.0 | diff --git a/docs/en/notes/projects.md b/docs/en/notes/projects.md index 393c27f9f7e..a896b0cadc6 100644 --- a/docs/en/notes/projects.md +++ b/docs/en/notes/projects.md @@ -17,5 +17,5 @@ Some of the papers are published in top-tier conferences (CVPR, ICCV, and ECCV), To make this list also a reference for the community to develop and compare new image classification algorithms, we list them following the time order of top-tier conferences. Methods already supported and maintained by MMClassification are not listed. -- Involution: Inverting the Inherence of Convolution for Visual Recognition, CVPR21. [\[paper\]](https://arxiv.org/abs/2103.06255)[\[github\]](https://github.com/d-li14/involution) -- Convolution of Convolution: Let Kernels Spatially Collaborate, CVPR22. [\[paper\]](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_Convolution_of_Convolution_Let_Kernels_Spatially_Collaborate_CVPR_2022_paper.pdf)[\[github\]](https://github.com/Genera1Z/ConvolutionOfConvolution) +- Involution: Inverting the Inherence of Convolution for Visual Recognition, CVPR21. [[paper]](https://arxiv.org/abs/2103.06255)[[github]](https://github.com/d-li14/involution) +- Convolution of Convolution: Let Kernels Spatially Collaborate, CVPR22. 
[[paper]](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_Convolution_of_Convolution_Let_Kernels_Spatially_Collaborate_CVPR_2022_paper.pdf)[[github]](https://github.com/Genera1Z/ConvolutionOfConvolution) diff --git a/docs/zh_CN/advanced_guides/schedule.md b/docs/zh_CN/advanced_guides/schedule.md index d112d76db0d..9fac31aae33 100644 --- a/docs/zh_CN/advanced_guides/schedule.md +++ b/docs/zh_CN/advanced_guides/schedule.md @@ -223,7 +223,7 @@ optim_wrapper = dict( ] ``` - 注意这里增加了 `begin` 和 `end` 参数,这两个参数指定了调度器的**生效区间**。生效区间通常只在多个调度器组合时才需要去设置,使用单个调度器时可以忽略。当指定了 `begin` 和 `end` 参数时,表示该调度器只在 \[begin, end) 区间内生效,其单位是由 `by_epoch` 参数决定。在组合不同调度器时,各调度器的 `by_epoch` 参数不必相同。如果没有指定的情况下,`begin` 为 0, `end` 为最大迭代轮次或者最大迭代次数。 + 注意这里增加了 `begin` 和 `end` 参数,这两个参数指定了调度器的**生效区间**。生效区间通常只在多个调度器组合时才需要去设置,使用单个调度器时可以忽略。当指定了 `begin` 和 `end` 参数时,表示该调度器只在 [begin, end) 区间内生效,其单位是由 `by_epoch` 参数决定。在组合不同调度器时,各调度器的 `by_epoch` 参数不必相同。如果没有指定的情况下,`begin` 为 0, `end` 为最大迭代轮次或者最大迭代次数。 如果相邻两个调度器的生效区间没有紧邻,而是有一段区间没有被覆盖,那么这段区间的学习率维持不变。而如果两个调度器的生效区间发生了重叠,则对多组调度器叠加使用,学习率的调整会按照调度器配置文件中的顺序触发(行为与 PyTorch 中 [`ChainedScheduler`](torch.optim.lr_scheduler.ChainedScheduler) 一致)。 diff --git a/docs/zh_CN/notes/faq.md b/docs/zh_CN/notes/faq.md index 195e9b77803..450214ae656 100644 --- a/docs/zh_CN/notes/faq.md +++ b/docs/zh_CN/notes/faq.md @@ -15,7 +15,7 @@ | MMClassification version | MMCV version | | :----------------------: | :--------------------: | - | 1.0.0rc2 (1.x) | mmcv>=2.0.0rc1 | + | 1.0.0rc3 (1.x) | mmcv>=2.0.0rc1 | | 0.24.0 (master) | mmcv>=1.4.2, \<1.7.0 | | 0.23.1 | mmcv>=1.4.2, \<1.6.0 | | 0.22.1 | mmcv>=1.4.2, \<1.6.0 | diff --git a/docs/zh_CN/user_guides/train_test.md b/docs/zh_CN/user_guides/train_test.md index 3bc238797e0..4380a8c41e8 100644 --- a/docs/zh_CN/user_guides/train_test.md +++ b/docs/zh_CN/user_guides/train_test.md @@ -28,7 +28,7 @@ CUDA_VISIBLE_DEVICES=-1 python tools/train.py ${CONFIG_FILE} [ARGS] | `--amp` | 启用混合精度训练。 | | `--no-validate` | **不建议** 在训练过程中不进行验证集上的精度验证。 | | 
`--auto-scale-lr` | 自动根据实际的批次大小(batch size)和预设的批次大小对学习率进行缩放。 | -| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 \`key="\[(a,b),(c,d)\]",这里的引号是不可省略的。另外每个重载项内部不可出现空格。 | +| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 `key="[(a,b),(c,d)]"`,这里的引号是不可省略的。另外每个重载项内部不可出现空格。 | | `--launcher {none,pytorch,slurm,mpi}` | 启动器,默认为 "none"。 | ### 单机多卡训练 @@ -141,7 +141,7 @@ CUDA_VISIBLE_DEVICES=-1 python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [ | `--work-dir WORK_DIR` | 用来保存测试指标结果的文件夹。 | | `--out OUT` | 用来保存测试指标结果的文件。 | | `--dump DUMP` | 用来保存所有模型输出的文件,这些数据可以用于离线测评。 | -| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 \`key="\[(a,b),(c,d)\]",这里的引号是不可省略的。另外每个重载项内部不可出现空格。 | +| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 `key="[(a,b),(c,d)]"`,这里的引号是不可省略的。另外每个重载项内部不可出现空格。 | | `--show-dir SHOW_DIR` | 用于保存可视化预测结果图像的文件夹。 | | `--show` | 在窗口中显示预测结果图像。 | | `--interval INTERVAL` | 每隔多少样本进行一次预测结果可视化。 | diff --git a/mmcls/models/backbones/hornet.py b/mmcls/models/backbones/hornet.py index e6d107045f5..7e563e2443a 100644 --- a/mmcls/models/backbones/hornet.py +++ b/mmcls/models/backbones/hornet.py @@ -250,7 +250,7 @@ def forward(self, x): @MODELS.register_module() class HorNet(BaseBackbone): - """HorNet. + """HorNet backbone. A PyTorch implementation of paper `HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions @@ -262,6 +262,7 @@ class HorNet(BaseBackbone): If use string, choose from 'tiny', 'small', 'base' and 'large'. If use dict, it should have below keys: + - **base_dim** (int): The base dimensions of embedding. - **depths** (List[int]): The number of blocks in each stage. 
- **orders** (List[int]): The number of order of gnConv in each @@ -273,7 +274,7 @@ class HorNet(BaseBackbone): drop_path_rate (float): Stochastic depth rate. Defaults to 0. scale (float): Scaling parameter of gflayer outputs. Defaults to 1/3. use_layer_scale (bool): Whether to use use_layer_scale in HorNet - block. Defaults to True. + block. Defaults to True. out_indices (Sequence[int]): Output from which stages. Default: ``(3, )``. frozen_stages (int): Stages to be frozen (stop grad and set eval mode). diff --git a/mmcls/models/backbones/repvgg.py b/mmcls/models/backbones/repvgg.py index 51a760bce59..8dd38e45cbf 100644 --- a/mmcls/models/backbones/repvgg.py +++ b/mmcls/models/backbones/repvgg.py @@ -309,43 +309,48 @@ class RepVGG(BaseBackbone): `_ Args: - arch (str | dict): RepVGG architecture. If use string, - choose from 'A0', 'A1`', 'A2', 'B0', 'B1', 'B1g2', 'B1g4', 'B2' - , 'B2g2', 'B2g4', 'B3', 'B3g2', 'B3g4' or 'D2se'. If use dict, - it should have below keys: - - num_blocks (Sequence[int]): Number of blocks in each stage. - - width_factor (Sequence[float]): Width deflator in each stage. - - group_layer_map (dict | None): RepVGG Block that declares + arch (str | dict): RepVGG architecture. If use string, choose from + 'A0', 'A1`', 'A2', 'B0', 'B1', 'B1g2', 'B1g4', 'B2', 'B2g2', + 'B2g4', 'B3', 'B3g2', 'B3g4' or 'D2se'. If use dict, it should + have below keys: + + - **num_blocks** (Sequence[int]): Number of blocks in each stage. + - **width_factor** (Sequence[float]): Width deflator in each stage. + - **group_layer_map** (dict | None): RepVGG Block that declares the need to apply group convolution. - - se_cfg (dict | None): Se Layer config. - - stem_channels (int, optional): The stem channels, the final - stem channels will be - ``min(stem_channels, base_channels*width_factor[0])``. - If not set here, 64 is used by default in the code. - in_channels (int): Number of input image channels. Default: 3. + - **se_cfg** (dict | None): SE Layer config. 
+ - **stem_channels** (int, optional): The stem channels, the final + stem channels will be + ``min(stem_channels, base_channels*width_factor[0])``. + If not set here, 64 is used by default in the code. + + in_channels (int): Number of input image channels. Defaults to 3. base_channels (int): Base channels of RepVGG backbone, work with width_factor together. Defaults to 64. - out_indices (Sequence[int]): Output from which stages. Default: (3, ). + out_indices (Sequence[int]): Output from which stages. + Defaults to ``(3, )``. strides (Sequence[int]): Strides of the first block of each stage. - Default: (2, 2, 2, 2). + Defaults to ``(2, 2, 2, 2)``. dilations (Sequence[int]): Dilation of each stage. - Default: (1, 1, 1, 1). + Defaults to ``(1, 1, 1, 1)``. frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. Default: -1. - conv_cfg (dict | None): The config dict for conv layers. Default: None. + not freezing any parameters. Defaults to -1. + conv_cfg (dict | None): The config dict for conv layers. + Defaults to None. norm_cfg (dict): The config dict for norm layers. - Default: dict(type='BN'). + Defaults to ``dict(type='BN')``. act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). + Defaults to ``dict(type='ReLU')``. with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. + memory while slowing down the training speed. Defaults to False. deploy (bool): Whether to switch the model structure to deployment - mode. Default: False. + mode. Defaults to False. norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - add_ppf (bool): Whether to use the MTSPPF block. Default: False. + and its variants only. Defaults to False. + add_ppf (bool): Whether to use the MTSPPF block. Defaults to False. 
init_cfg (dict or list[dict], optional): Initialization config dict. + Defaults to None. """ groupwise_layers = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26] diff --git a/mmcls/version.py b/mmcls/version.py index 68cc1a1f1c7..5e6347a5fed 100644 --- a/mmcls/version.py +++ b/mmcls/version.py @@ -1,6 +1,6 @@ # Copyright (c) OpenMMLab. All rights reserved -__version__ = '1.0.0rc2' +__version__ = '1.0.0rc3' def parse_version_info(version_str):
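The diff ends at the signature of `parse_version_info`, whose body is truncated here. A sketch consistent with the usual OpenMMLab implementation (the exact return shape is an assumption):

```python
# A sketch of what `parse_version_info` typically does in OpenMMLab
# projects; the body is truncated in the diff above, so the exact
# return shape here is an assumption.
def parse_version_info(version_str):
    """Split a version string like '1.0.0rc3' into (1, 0, 0, 'rc3')."""
    version_info = []
    for part in version_str.split('.'):
        if part.isdigit():
            version_info.append(int(part))
        elif 'rc' in part:
            patch, rc = part.split('rc')
            version_info.append(int(patch))
            version_info.append(f'rc{rc}')
    return tuple(version_info)

print(parse_version_info('1.0.0rc3'))  # (1, 0, 0, 'rc3')
```

A tuple like this compares element-wise, which is why release-candidate suffixes are kept as separate components rather than folded into the patch number.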