Bump version to 1.0.0 (#1686)
* bump version to 1.0.0

* update

* update

* fix lint

* update

* update

* update changelog

* update
fangyixiao18 authored Jul 5, 2023
1 parent 0d80ab4 commit ae7a7b7
Showing 13 changed files with 92 additions and 64 deletions.
10 changes: 0 additions & 10 deletions .dev_scripts/benchmark_options.py

This file was deleted.

2 changes: 1 addition & 1 deletion .dev_scripts/benchmark_regression/bench_test.yml
@@ -1,7 +1,7 @@
- Name: convnext-base_32xb128_in1k
- Name: convnext-v2-atto_fcmae-pre_3rdparty_in1k
- Name: mobilenet-v2_8xb32_in1k
-- Name: mobilenet-v3-small-050_8xb128_in1k
+- Name: mobilenet-v3-small-050_3rdparty_in1k
- Name: swin-tiny_16xb64_in1k
- Name: swinv2-tiny-w8_3rdparty_in1k-256px
- Name: vit-base-p16_32xb128-mae_in1k
22 changes: 0 additions & 22 deletions .github/workflows/deploy.yml

This file was deleted.

25 changes: 14 additions & 11 deletions README.md
@@ -86,28 +86,25 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351

## What's new

🌟 v1.0.0 was released on 04/07/2023

- Support inference of more **multi-modal** algorithms, such as [**LLaVA**](./configs/llava/), [**MiniGPT-4**](./configs/minigpt4), [**Otter**](./configs/otter/), etc.
- Support around **10 multi-modal** datasets!
- Add [**iTPN**](./configs/itpn/), [**SparK**](./configs/spark/) self-supervised learning algorithms.
- Provide examples of [New Config](./mmpretrain/configs/) and [DeepSpeed/FSDP with FlexibleRunner](./configs/mae/benchmarks/). Here are the documentation links of [New Config](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta) and [DeepSpeed/FSDP with FlexibleRunner](https://mmengine.readthedocs.io/en/latest/api/generated/mmengine.runner.FlexibleRunner.html#mmengine.runner.FlexibleRunner).
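
A hedged sketch of the pure-Python "New Config" style mentioned above — the base-config module paths below are illustrative placeholders; see the linked MMEngine documentation for the authoritative syntax:

```python
# Pure-Python-style config (MMEngine "New Config", beta): base configs
# are imported as Python modules inside read_base() instead of being
# referenced by string path. The base module names are placeholders.
from mmengine.config import read_base

with read_base():
    from .._base_.models.resnet18 import *          # noqa: F401,F403
    from .._base_.datasets.imagenet_bs32 import *   # noqa: F401,F403
    from .._base_.schedules.imagenet_bs256 import * # noqa: F401,F403
    from .._base_.default_runtime import *          # noqa: F401,F403
```

Because the config is plain Python, IDE navigation and refactoring work on it directly, which is the main motivation the MMEngine doc gives for the new style.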

🌟 v1.0.0rc8 was released on 22/05/2023

- Support multiple **multi-modal** algorithms and inferencers. You can explore these features through the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
- Add EVA-02, Dino-V2, ViT-SAM and GLIP backbones.
- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations into MMPretrain. See [the doc](https://mmpretrain.readthedocs.io/en/latest/api/data_process.html#torchvision-transforms).
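
The torchvision wrapper mentioned above registers torchvision transforms under a `torchvision/` type prefix. A hedged sketch of a pipeline using it — the parameter values are illustrative; consult the linked doc for exact names:

```python
# Data pipeline mixing MMPretrain transforms with registered torchvision
# ones; per the linked doc, torchvision transforms are referenced with a
# "torchvision/" prefix in the type field. Values are illustrative.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='torchvision/RandomResizedCrop', size=176),
    dict(type='torchvision/RandomHorizontalFlip', p=0.5),
    dict(type='torchvision/TrivialAugmentWide'),
    dict(type='PackInputs'),
]
```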

🌟 v1.0.0rc7 was released on 07/04/2023
Update of previous versions

- Integrated self-supervised learning algorithms from **MMSelfSup**, such as **MAE**, **BEiT**, etc.
- Support **RIFormer**, a simple but effective vision backbone that removes the token mixer.
- Add t-SNE visualization.
- Refactor dataset pipeline visualization.

Update of previous versions

- Support **LeViT**, **XCiT**, **ViG**, **ConvNeXt-V2**, **EVA**, **RevViT**, **EfficientNetV2**, **CLIP**, **TinyViT** and **MixMIM** backbones.
- Reproduce the training accuracy of **ConvNeXt** and **RepVGG**.
- Support confusion matrix calculation and plot.
- Support **multi-task** training and testing.
- Support Test-time Augmentation.
- Upgrade API to get pre-defined models of MMPreTrain.
- Refactor BEiT backbone and support v1/v2 inference.

This release introduced a brand new and flexible training & test engine, which is still in progress. You are welcome
to try it following [the documentation](https://mmpretrain.readthedocs.io/en/latest/).
@@ -224,6 +221,10 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
<li><a href="configs/levit">LeViT</a></li>
<li><a href="configs/riformer">RIFormer</a></li>
<li><a href="configs/glip">GLIP</a></li>
+<li><a href="configs/sam">ViT SAM</a></li>
+<li><a href="configs/eva02">EVA02</a></li>
+<li><a href="configs/dinov2">DINO V2</a></li>
+<li><a href="configs/hivit">HiViT</a></li>
</ul>
</td>
<td>
@@ -246,6 +247,8 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
<li><a href="configs/beitv2">BEiT V2 (arXiv'2022)</a></li>
<li><a href="configs/eva">EVA (CVPR'2023)</a></li>
<li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
+<li><a href="configs/itpn">iTPN (CVPR'2023)</a></li>
+<li><a href="configs/spark">SparK (ICLR'2023)</a></li>
</ul>
</td>
<td>
25 changes: 14 additions & 11 deletions README_zh-CN.md
@@ -84,28 +84,25 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351

## What's new

🌟 v1.0.0 was released on 2023/7/4

- Support inference of more **multi-modal** algorithms, such as [**LLaVA**](./configs/llava/), [**MiniGPT-4**](./configs/minigpt4), [**Otter**](./configs/otter/), etc.
- Support around **10 multi-modal** datasets!
- Add the [**iTPN**](./configs/itpn/) and [**SparK**](./configs/spark/) self-supervised learning algorithms.
- Provide examples of [New Config](./mmpretrain/configs/) and [DeepSpeed/FSDP](./configs/mae/benchmarks/). Documentation links: [New Config](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta) and [DeepSpeed/FSDP with FlexibleRunner](https://mmengine.readthedocs.io/en/latest/api/generated/mmengine.runner.FlexibleRunner.html#mmengine.runner.FlexibleRunner).

🌟 v1.0.0rc8 was released on 2023/5/22

- Support multiple multi-modal algorithms and inferencers. You can explore these features through the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
- Add EVA-02, Dino-V2, ViT-SAM and GLIP backbones.
- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations into MMPretrain.

🌟 v1.0.0rc7 was released on 2023/4/7
Update of previous versions

- Integrated self-supervised learning algorithms from MMSelfSup, such as `MAE` and `BEiT`.
- Support **RIFormer**, a simple but effective vision backbone that removes the token mixer.
- Add t-SNE visualization.
- Refactor dataset pipeline visualization.

Update of previous versions

- Support **LeViT**, **XCiT**, **ViG**, **ConvNeXt-V2**, **EVA**, **RevViT**, **EfficientNetV2**, **CLIP**, **TinyViT** and **MixMIM** backbones.
- Reproduce the training accuracy of ConvNeXt and RepVGG.
- Support confusion matrix calculation and plotting.
- Support **multi-task** training and testing.
- Support test-time augmentation (TTA).
- Upgrade the API for getting pre-defined models of MMPreTrain.
- Refactor the BEiT backbone and support v1/v2 inference.

This release introduced a brand new and flexible training & test engine, which is still in progress. You are welcome to try it following [the documentation](https://mmpretrain.readthedocs.io/zh_CN/latest/).

@@ -220,6 +217,10 @@ mim install -e ".[multimodal]"
<li><a href="configs/levit">LeViT</a></li>
<li><a href="configs/riformer">RIFormer</a></li>
<li><a href="configs/glip">GLIP</a></li>
+<li><a href="configs/sam">ViT SAM</a></li>
+<li><a href="configs/eva02">EVA02</a></li>
+<li><a href="configs/dinov2">DINO V2</a></li>
+<li><a href="configs/hivit">HiViT</a></li>
</ul>
</td>
<td>
@@ -242,6 +243,8 @@ mim install -e ".[multimodal]"
<li><a href="configs/beitv2">BEiT V2 (arXiv'2022)</a></li>
<li><a href="configs/eva">EVA (CVPR'2023)</a></li>
<li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
+<li><a href="configs/itpn">iTPN (CVPR'2023)</a></li>
+<li><a href="configs/spark">SparK (ICLR'2023)</a></li>
</ul>
</td>
<td>
@@ -4,7 +4,6 @@
optim_wrapper = dict(type='DeepSpeedOptimWrapper')

# training strategy
# Deepspeed with ZeRO3 + fp16
strategy = dict(
type='DeepSpeedStrategy',
fp16=dict(
@@ -4,7 +4,6 @@
optim_wrapper = dict(type='DeepSpeedOptimWrapper')

# training strategy
# Deepspeed with ZeRO3 + fp16
strategy = dict(
type='DeepSpeedStrategy',
fp16=dict(
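
The `strategy` dict in the hunk above is truncated at `fp16=dict(`. A hedged sketch of how a "DeepSpeed with ZeRO3 + fp16" strategy block is typically completed — every field value below is an assumption for illustration, not the file's actual contents:

```python
# Illustrative completion of the truncated DeepSpeedStrategy config;
# the file's own comment says "Deepspeed with ZeRO3 + fp16", so a
# plausible shape (values are placeholders) is:
strategy = dict(
    type='DeepSpeedStrategy',
    fp16=dict(
        enabled=True,
        loss_scale=0,            # 0 requests dynamic loss scaling
        initial_scale_power=16,
    ),
    zero_optimization=dict(
        stage=3,                 # ZeRO stage 3: partition params, grads, optimizer states
        overlap_comm=True,
    ),
)
```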
54 changes: 54 additions & 0 deletions docs/en/notes/changelog.md
@@ -1,5 +1,59 @@
# Changelog (MMPreTrain)

## v1.0.0(04/07/2023)

### Highlights

- Support inference of more **multi-modal** algorithms, such as **LLaVA**, **MiniGPT-4**, **Otter**, etc.
- Support around **10 multi-modal datasets**!
- Add **iTPN**, **SparK** self-supervised learning algorithms.
- Provide examples of [New Config](./mmpretrain/configs/) and [DeepSpeed/FSDP](./configs/mae/benchmarks/).

### New Features

- Transfer the shape-bias tool from MMSelfSup ([#1658](https://github.com/open-mmlab/mmpretrain/pull/1685))
- Download dataset by using MIM&OpenDataLab ([#1630](https://github.com/open-mmlab/mmpretrain/pull/1630))
- Support New Configs ([#1639](https://github.com/open-mmlab/mmpretrain/pull/1639), [#1647](https://github.com/open-mmlab/mmpretrain/pull/1647), [#1665](https://github.com/open-mmlab/mmpretrain/pull/1665))
- Support Flickr30k Retrieval dataset ([#1625](https://github.com/open-mmlab/mmpretrain/pull/1625))
- Support SparK ([#1531](https://github.com/open-mmlab/mmpretrain/pull/1531))
- Support LLaVA ([#1652](https://github.com/open-mmlab/mmpretrain/pull/1652))
- Support Otter ([#1651](https://github.com/open-mmlab/mmpretrain/pull/1651))
- Support MiniGPT-4 ([#1642](https://github.com/open-mmlab/mmpretrain/pull/1642))
- Add support for the VizWiz dataset ([#1636](https://github.com/open-mmlab/mmpretrain/pull/1636))
- Add support for the VSR dataset ([#1634](https://github.com/open-mmlab/mmpretrain/pull/1634))
- Add InternImage Classification project ([#1569](https://github.com/open-mmlab/mmpretrain/pull/1569))
- Support OCR-VQA dataset ([#1621](https://github.com/open-mmlab/mmpretrain/pull/1621))
- Support OK-VQA dataset ([#1615](https://github.com/open-mmlab/mmpretrain/pull/1615))
- Support TextVQA dataset ([#1569](https://github.com/open-mmlab/mmpretrain/pull/1569))
- Support iTPN and HiViT ([#1584](https://github.com/open-mmlab/mmpretrain/pull/1584))
- Add retrieval mAP metric ([#1552](https://github.com/open-mmlab/mmpretrain/pull/1552))
- Support the NoCaps dataset based on BLIP ([#1582](https://github.com/open-mmlab/mmpretrain/pull/1582))
- Add GQA dataset ([#1585](https://github.com/open-mmlab/mmpretrain/pull/1585))

### Improvements

- Update fsdp vit-huge and vit-large config ([#1675](https://github.com/open-mmlab/mmpretrain/pull/1675))
- Support deepspeed with flexible runner ([#1673](https://github.com/open-mmlab/mmpretrain/pull/1673))
- Update Otter and LLaVA docs and config. ([#1653](https://github.com/open-mmlab/mmpretrain/pull/1653))
- Add image_only param of ScienceQA ([#1613](https://github.com/open-mmlab/mmpretrain/pull/1613))
- Support using "split" to specify the training/validation set ([#1535](https://github.com/open-mmlab/mmpretrain/pull/1535))
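
Per the last bullet, dataset configs can pass a `split` argument to pick the training or validation subset. A hedged sketch of what such configs might look like — the dataset class name and `data_root` are illustrative placeholders, not quoted from the repo:

```python
# Illustrative dataset configs using the "split" parameter from PR #1535;
# 'ImageNet' and the data_root path are placeholders.
train_dataloader = dict(
    dataset=dict(type='ImageNet', data_root='data/imagenet', split='train'),
)
val_dataloader = dict(
    dataset=dict(type='ImageNet', data_root='data/imagenet', split='val'),
)
```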

### Bug Fixes

- Refactor \_prepare_pos_embed in ViT ([#1656](https://github.com/open-mmlab/mmpretrain/pull/1656), [#1679](https://github.com/open-mmlab/mmpretrain/pull/1679))
- Freeze pre norm in vision transformer ([#1672](https://github.com/open-mmlab/mmpretrain/pull/1672))
- Fix bug loading IN1k dataset ([#1641](https://github.com/open-mmlab/mmpretrain/pull/1641))
- Fix sam bug ([#1633](https://github.com/open-mmlab/mmpretrain/pull/1633))
- Fixed circular import error for new transform ([#1609](https://github.com/open-mmlab/mmpretrain/pull/1609))
- Update torchvision transform wrapper ([#1595](https://github.com/open-mmlab/mmpretrain/pull/1595))
- Set default out_type in CAM visualization ([#1586](https://github.com/open-mmlab/mmpretrain/pull/1586))

### Docs Update

- Fix spelling ([#1681](https://github.com/open-mmlab/mmpretrain/pull/1681))
- Fix doc typos ([#1671](https://github.com/open-mmlab/mmpretrain/pull/1671), [#1644](https://github.com/open-mmlab/mmpretrain/pull/1644), [#1629](https://github.com/open-mmlab/mmpretrain/pull/1629))
- Add t-SNE visualization doc ([#1555](https://github.com/open-mmlab/mmpretrain/pull/1555))

## v1.0.0rc8(22/05/2023)

### Highlights
3 changes: 2 additions & 1 deletion docs/en/notes/faq.md
@@ -16,7 +16,8 @@ and make sure you fill in all required information in the template.

| MMPretrain version | MMEngine version | MMCV version |
| :----------------: | :---------------: | :--------------: |
-| 1.0.0rc8 (main) | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
+| 1.0.0 (main) | mmengine >= 0.8.0 | mmcv >= 2.0.0 |
+| 1.0.0rc8 | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
| 1.0.0rc7 | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |

```{note}
3 changes: 2 additions & 1 deletion docs/zh_CN/notes/faq.md
@@ -13,7 +13,8 @@

| MMPretrain version | MMEngine version | MMCV version |
| :-------------: | :---------------: | :--------------: |
-| 1.0.0rc8 (main) | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
+| 1.0.0 (main) | mmengine >= 0.8.0 | mmcv >= 2.0.0 |
+| 1.0.0rc8 | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
| 1.0.0rc7 | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |

```{note}
4 changes: 2 additions & 2 deletions mmpretrain/__init__.py
@@ -6,11 +6,11 @@
from .apis import * # noqa: F401, F403
from .version import __version__

-mmcv_minimum_version = '2.0.0rc4'
+mmcv_minimum_version = '2.0.0'
mmcv_maximum_version = '2.1.0'
mmcv_version = digit_version(mmcv.__version__)

-mmengine_minimum_version = '0.7.3'
+mmengine_minimum_version = '0.8.0'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)
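
The module above gates installation with `digit_version` range checks. A self-contained sketch of that ordering logic — note this is a simplified stand-in for `mmengine.utils.digit_version`, written here only for illustration, not the library's actual implementation:

```python
import re

def digit_version(version_str):
    """Simplified stand-in for mmengine's digit_version: map a version
    string to a comparable tuple, ordering release candidates before the
    corresponding final release (e.g. '2.0.0rc4' < '2.0.0')."""
    parts = []
    for chunk in version_str.split('.'):
        m = re.match(r'(\d+)(rc)?(\d+)?$', chunk)
        parts.append(int(m.group(1)))
        if m.group(2):                    # an 'rc' suffix, e.g. '0rc4'
            parts += [-1, int(m.group(3) or 0)]
        else:
            parts += [0, 0]
    return tuple(parts)

# The same shape of check that mmpretrain/__init__.py performs for mmcv:
mmcv_minimum_version = '2.0.0'
mmcv_maximum_version = '2.1.0'

def mmcv_version_ok(installed: str) -> bool:
    v = digit_version(installed)
    return (digit_version(mmcv_minimum_version) <= v
            < digit_version(mmcv_maximum_version))
```

With this ordering, `'2.0.0rc4'` no longer satisfies the bumped `>= 2.0.0` floor, which is why `requirements/mminstall.txt` moves from `2.0.0rc4` to `2.0.0` in the same commit.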

2 changes: 1 addition & 1 deletion mmpretrain/version.py
@@ -1,6 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved

-__version__ = '1.0.0rc8'
+__version__ = '1.0.0'


def parse_version_info(version_str):
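
The body of `parse_version_info` is truncated in the diff. A plausible reconstruction, hedged — this follows the usual OpenMMLab convention for such helpers rather than quoting the file's actual code:

```python
# Hypothetical reconstruction of the truncated helper: split '1.0.0'
# into (1, 0, 0) and '1.0.0rc8' into (1, 0, 0, 'rc', 8), the tuple
# shape conventionally exposed as version_info.
def parse_version_info(version_str):
    version_info = []
    for x in version_str.split('.'):
        if x.isdigit():
            version_info.append(int(x))
        elif 'rc' in x:
            patch_version = x.split('rc')
            version_info.append(int(patch_version[0]))
            version_info.append('rc')
            version_info.append(int(patch_version[1]))
    return tuple(version_info)

version_info = parse_version_info('1.0.0')
```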
4 changes: 2 additions & 2 deletions requirements/mminstall.txt
@@ -1,2 +1,2 @@
-mmcv>=2.0.0rc4,<2.1.0
-mmengine>=0.7.3,<1.0.0
+mmcv>=2.0.0,<2.1.0
+mmengine>=0.8.0,<1.0.0
