
Commit

Bump version to v1.0.0rc8 (#1583)
* Bump version to v1.0.0rc8

* Apply suggestions from code review

Co-authored-by: Yixiao Fang <[email protected]>

* Update README.md

---------

Co-authored-by: Yixiao Fang <[email protected]>
mzr1996 and fangyixiao18 authored May 23, 2023
1 parent be389eb commit 4dd8a86
Showing 10 changed files with 94 additions and 9 deletions.
18 changes: 18 additions & 0 deletions README.md
@@ -86,6 +86,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351

## What's new

🌟 v1.0.0rc8 was released on 22/05/2023

- Support multiple **multi-modal** algorithms and inferencers. You can explore these features with the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
- Add EVA-02, DINOv2, ViT-SAM and GLIP backbones.
- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations in MMPretrain. See [the doc](https://mmpretrain.readthedocs.io/en/latest/api/data_process.html#torchvision-transforms) and the pipeline sketch below.
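
As a rough illustration of that integration, a pipeline config could mix native and torchvision transforms as follows. This is a minimal sketch: the `torchvision/` prefix convention and the exact transform names and arguments are assumptions to be confirmed against the linked doc page.

```python
# Hypothetical MMPretrain pipeline mixing native and torchvision transforms.
# The "torchvision/" prefix is assumed to be how the registered torchvision
# transforms are referenced; arguments here are illustrative, not verified configs.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='torchvision/RandomResizedCrop', size=224),  # torchvision augmentation
    dict(type='torchvision/RandomHorizontalFlip', p=0.5),  # torchvision augmentation
    dict(type='PackInputs'),  # MMPretrain's own packing transform
]
```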

🌟 v1.0.0rc7 was released on 07/04/2023

- Integrated self-supervised learning algorithms from **MMSelfSup**, such as **MAE**, **BEiT**, etc.
@@ -160,6 +166,9 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
<td>
<b>Self-supervised Learning</b>
</td>
<td>
<b>Multi-Modality Algorithms</b>
</td>
<td>
<b>Others</b>
</td>
@@ -239,6 +248,15 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
<li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="configs/blip">BLIP (arxiv'2022)</a></li>
<li><a href="configs/blip2">BLIP-2 (arxiv'2023)</a></li>
<li><a href="configs/ofa">OFA (CoRR'2022)</a></li>
<li><a href="configs/flamingo">Flamingo (NeurIPS'2022)</a></li>
<li><a href="configs/chinese_clip">Chinese CLIP (arxiv'2022)</a></li>
</ul>
</td>
<td>
Image Retrieval Task:
<ul>
18 changes: 18 additions & 0 deletions README_zh-CN.md
@@ -84,6 +84,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351

## Changelog

🌟 v1.0.0rc8 was released on 2023/5/22

- Support multiple multi-modal algorithms and inferencers. You can explore these features with the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
- Add EVA-02, DINOv2, ViT-SAM and GLIP backbones.
- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations into MMPretrain.

🌟 v1.0.0rc7 was released on 2023/4/7

- Integrate self-supervised learning algorithms from MMSelfSup, such as `MAE` and `BEiT`
@@ -157,6 +163,9 @@ mim install -e ".[multimodal]"
<td>
<b>Self-supervised Learning</b>
</td>
<td>
<b>Multi-Modality Algorithms</b>
</td>
<td>
<b>Others</b>
</td>
@@ -235,6 +244,15 @@ mim install -e ".[multimodal]"
<li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="configs/blip">BLIP (arxiv'2022)</a></li>
<li><a href="configs/blip2">BLIP-2 (arxiv'2023)</a></li>
<li><a href="configs/ofa">OFA (CoRR'2022)</a></li>
<li><a href="configs/flamingo">Flamingo (NeurIPS'2022)</a></li>
<li><a href="configs/chinese_clip">Chinese CLIP (arxiv'2022)</a></li>
</ul>
</td>
<td>
Image Retrieval Task:
<ul>
2 changes: 1 addition & 1 deletion docker/serve/Dockerfile
@@ -3,7 +3,7 @@ ARG CUDA="11.3"
ARG CUDNN="8"
FROM pytorch/torchserve:latest-gpu

ARG MMPRE="1.0.0rc5"
ARG MMPRE="1.0.0rc8"

ENV PYTHONUNBUFFERED TRUE

4 changes: 2 additions & 2 deletions docs/en/get_started.md
@@ -63,7 +63,7 @@ pip install -U openmim && mim install -e .
Just install with mim.

```shell
pip install -U openmim && mim install "mmpretrain>=1.0.0rc7"
pip install -U openmim && mim install "mmpretrain>=1.0.0rc8"
```

```{note}
@@ -80,7 +80,7 @@ can add `[multimodal]` during the installation. For example:
mim install -e ".[multimodal]"

# Install as a Python package
mim install "mmpretrain[multimodal]>=1.0.0rc7"
mim install "mmpretrain[multimodal]>=1.0.0rc8"
```
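
After installation, one quick way to confirm that the bumped release is the version actually in the environment is to print the package version defined in `mmpretrain/version.py` (a small sketch, not part of the original guide):

```python
# Quick sanity check that the installed package matches the release this commit bumps to.
import mmpretrain

print(mmpretrain.__version__)  # expected to print 1.0.0rc8
```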

## Verify the installation
47 changes: 47 additions & 0 deletions docs/en/notes/changelog.md
@@ -1,5 +1,52 @@
# Changelog (MMPreTrain)

## v1.0.0rc8 (22/05/2023)

### Highlights

- Support multiple multi-modal algorithms and inferencers. You can explore these features with the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)! A minimal inferencer sketch follows this list.
- Add EVA-02, DINOv2, ViT-SAM and GLIP backbones.
- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations in MMPretrain.
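
For trying the new multi-modal support outside the gradio demo, a minimal sketch might look like the following, assuming the `ImageCaptionInferencer` class exposed by the inferencer API from [#1561](https://github.com/open-mmlab/mmpretrain/pull/1561); the model name and image path are placeholders, not verified checkpoints.

```python
# Hypothetical use of the new multi-modal inferencers; the model name and the
# image path below are placeholders and should be replaced before running.
from mmpretrain import ImageCaptionInferencer

inferencer = ImageCaptionInferencer(model='blip-base_caption')  # placeholder model name
result = inferencer('path/to/image.jpg')  # output format should be checked against the API docs
print(result)
```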

### New Features

- Support Chinese CLIP. ([#1576](https://github.com/open-mmlab/mmpretrain/pull/1576))
- Add ScienceQA metrics ([#1577](https://github.com/open-mmlab/mmpretrain/pull/1577))
- Support multiple multi-modal algorithms and inferencers. ([#1561](https://github.com/open-mmlab/mmpretrain/pull/1561))
- Add the EVA-02 backbone ([#1450](https://github.com/open-mmlab/mmpretrain/pull/1450))
- Support the DINOv2 backbone ([#1522](https://github.com/open-mmlab/mmpretrain/pull/1522))
- Support some downstream classification datasets. ([#1467](https://github.com/open-mmlab/mmpretrain/pull/1467))
- Support GLIP ([#1308](https://github.com/open-mmlab/mmpretrain/pull/1308))
- Register torchvision transforms into MMPretrain ([#1265](https://github.com/open-mmlab/mmpretrain/pull/1265))
- Add the ViT of SAM ([#1476](https://github.com/open-mmlab/mmpretrain/pull/1476))

### Improvements

- [Refactor] Support freezing channel reduction and add a layer decay function ([#1490](https://github.com/open-mmlab/mmpretrain/pull/1490))
- [Refactor] Support resizing `pos_embed` while loading checkpoints and format the output ([#1488](https://github.com/open-mmlab/mmpretrain/pull/1488))

### Bug Fixes

- Fix ScienceQA ([#1581](https://github.com/open-mmlab/mmpretrain/pull/1581))
- Fix the config of BEiT ([#1528](https://github.com/open-mmlab/mmpretrain/pull/1528))
- Fix incorrect stage freezing in the RIFormer model ([#1573](https://github.com/open-mmlab/mmpretrain/pull/1573))
- Fix DDP bugs caused by `out_type` ([#1570](https://github.com/open-mmlab/mmpretrain/pull/1570))
- Fix a potential bug in the multi-task head loss ([#1530](https://github.com/open-mmlab/mmpretrain/pull/1530))
- Support BCE loss without batch augmentations ([#1525](https://github.com/open-mmlab/mmpretrain/pull/1525))
- Fix the CLIP generator init bug ([#1518](https://github.com/open-mmlab/mmpretrain/pull/1518))
- Fix the bug in the binary cross entropy loss ([#1499](https://github.com/open-mmlab/mmpretrain/pull/1499))

### Docs Update

- Update PoolFormer citation to CVPR version ([#1505](https://github.com/open-mmlab/mmpretrain/pull/1505))
- Refine Inference Doc ([#1489](https://github.com/open-mmlab/mmpretrain/pull/1489))
- Add doc for usage of confusion matrix ([#1513](https://github.com/open-mmlab/mmpretrain/pull/1513))
- Update MMagic link ([#1517](https://github.com/open-mmlab/mmpretrain/pull/1517))
- Fix example_project README ([#1575](https://github.com/open-mmlab/mmpretrain/pull/1575))
- Add NPU support page ([#1481](https://github.com/open-mmlab/mmpretrain/pull/1481))
- Remove an old description from the train cfg docs ([#1473](https://github.com/open-mmlab/mmpretrain/pull/1473))
- Fix typo in MultiLabelDataset docstring ([#1483](https://github.com/open-mmlab/mmpretrain/pull/1483))

## v1.0.0rc7 (07/04/2023)

### Highlights
3 changes: 2 additions & 1 deletion docs/en/notes/faq.md
@@ -16,7 +16,8 @@ and make sure you fill in all required information in the template.

| MMPretrain version | MMEngine version | MMCV version |
| :----------------: | :---------------: | :--------------: |
| 1.0.0rc7 (main) | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
| 1.0.0rc8 (main) | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
| 1.0.0rc7 | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |

```{note}
Since the `dev` branch is under frequent development, the MMEngine and MMCV
4 changes: 2 additions & 2 deletions docs/zh_CN/get_started.md
@@ -67,7 +67,7 @@ pip install -U openmim && mim install -e .
Just install with mim.

```shell
pip install -U openmim && mim install "mmpretrain>=1.0.0rc7"
pip install -U openmim && mim install "mmpretrain>=1.0.0rc8"
```

```{note}
@@ -83,7 +83,7 @@ The multi-modal models in MMPretrain require extra dependencies. To install these dependencies,
mim install -e ".[multimodal]"

# Install as a Python package
mim install "mmpretrain[multimodal]>=1.0.0rc7"
mim install "mmpretrain[multimodal]>=1.0.0rc8"
```

## Verify the installation
3 changes: 2 additions & 1 deletion docs/zh_CN/notes/faq.md
@@ -13,7 +13,8 @@

| MMPretrain version | MMEngine version | MMCV version |
| :-------------: | :---------------: | :--------------: |
| 1.0.0rc7 (main) | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
| 1.0.0rc8 (main) | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
| 1.0.0rc7 | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |

```{note}
Since the `dev` branch is under frequent development, the MMEngine and MMCV version requirements may be inaccurate. If you are using
2 changes: 1 addition & 1 deletion mmpretrain/__init__.py
@@ -10,7 +10,7 @@
mmcv_maximum_version = '2.1.0'
mmcv_version = digit_version(mmcv.__version__)

mmengine_minimum_version = '0.5.0'
mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)
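
For context on how the bounds above are used, here is a short sketch of the common OpenMMLab version-check pattern; the actual assertion in `mmpretrain/__init__.py` is elided from this hunk, so this is an illustration, not that exact code.

```python
# Sketch of the OpenMMLab-style dependency check built on digit_version; the
# assertion message and exact handling in mmpretrain/__init__.py may differ.
import mmengine
from mmengine.utils import digit_version

mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)

assert (digit_version(mmengine_minimum_version) <= mmengine_version
        < digit_version(mmengine_maximum_version)), (
    f'MMEngine {mmengine.__version__} is incompatible; please install '
    f'mmengine>={mmengine_minimum_version},<{mmengine_maximum_version}.')
```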

2 changes: 1 addition & 1 deletion mmpretrain/version.py
@@ -1,6 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved

__version__ = '1.0.0rc7'
__version__ = '1.0.0rc8'


def parse_version_info(version_str):
