diff --git a/README.md b/README.md
index e6a0afbe21d..9d9494345a4 100644
--- a/README.md
+++ b/README.md
@@ -86,6 +86,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351
## What's new
+🌟 v1.0.0rc8 was released on 22/05/2023
+
+- Support multiple **multi-modal** algorithms and inferencers. You can explore these features with the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo) or the quick inferencer sketch below!
+- Add EVA-02, DINOv2, ViT-SAM and GLIP backbones.
+- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations into MMPretrain. See [the doc](https://mmpretrain.readthedocs.io/en/latest/api/data_process.html#torchvision-transforms).
+
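As a quick complement to the Gradio demo, here is a minimal, hedged sketch of calling one of the new multi-modal models from Python. The model name and image path are placeholders rather than guaranteed identifiers; run `list_models()` to see what your installation actually provides.

```python
# Hedged sketch: the model name and image path below are placeholders.
# Run list_models() first to see the multi-modal models shipped with
# your version of MMPretrain.
from mmpretrain import inference_model, list_models

print(list_models())  # e.g. captioning, VQA or retrieval variants
result = inference_model('blip-base_3rdparty_caption', 'demo/cat-dog.png')
print(result)
```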
🌟 v1.0.0rc7 was released on 07/04/2023
- Integrated Self-supervised learning algorithms from **MMSelfSup**, such as **MAE**, **BEiT**, etc.
@@ -160,6 +166,9 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
Self-supervised Learning
|
+
+ Multi-Modality Algorithms
+ |
Others
|
@@ -239,6 +248,15 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
MixMIM (arXiv'2022)
+
+
+ |
Image Retrieval Task:
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 50426aca2a2..ba1e5ff2a39 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -84,6 +84,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351
## 更新日志
+🌟 2023/5/22 发布了 v1.0.0rc8 版本
+
+- 支持多种多模态算法和推理器。您可以通过 [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo) 探索这些功能!
+- 新增 EVA-02,DINOv2,ViT-SAM 和 GLIP 主干网络。
+- 将 torchvision 变换注册到 MMPretrain,现在您可以轻松地将 torchvision 的数据增强集成到 MMPretrain 中。
+
🌟 2023/4/7 发布了 v1.0.0rc7 版本
- 整合来自 MMSelfSup 的自监督学习算法,例如 `MAE`, `BEiT` 等
@@ -157,6 +163,9 @@ mim install -e ".[multimodal]"
自监督学习
|
+
+ 多模态算法
+ |
其它
|
@@ -235,6 +244,15 @@ mim install -e ".[multimodal]"
- MixMIM (arXiv'2022)
|
+
+
+ |
图像检索任务:
diff --git a/docker/serve/Dockerfile b/docker/serve/Dockerfile
index 722a324f268..77d485bd400 100644
--- a/docker/serve/Dockerfile
+++ b/docker/serve/Dockerfile
@@ -3,7 +3,7 @@ ARG CUDA="11.3"
ARG CUDNN="8"
FROM pytorch/torchserve:latest-gpu
-ARG MMPRE="1.0.0rc5"
+ARG MMPRE="1.0.0rc8"
ENV PYTHONUNBUFFERED TRUE
diff --git a/docs/en/get_started.md b/docs/en/get_started.md
index 51821cfc5ca..5d33ac00969 100644
--- a/docs/en/get_started.md
+++ b/docs/en/get_started.md
@@ -63,7 +63,7 @@ pip install -U openmim && mim install -e .
Just install with mim.
```shell
-pip install -U openmim && mim install "mmpretrain>=1.0.0rc7"
+pip install -U openmim && mim install "mmpretrain>=1.0.0rc8"
```
```{note}
@@ -80,7 +80,7 @@ can add `[multimodal]` during the installation. For example:
mim install -e ".[multimodal]"
# Install as a Python package
-mim install "mmpretrain[multimodal]>=1.0.0rc7"
+mim install "mmpretrain[multimodal]>=1.0.0rc8"
```
## Verify the installation
diff --git a/docs/en/notes/changelog.md b/docs/en/notes/changelog.md
index ddfbde1e942..de68e1d8610 100644
--- a/docs/en/notes/changelog.md
+++ b/docs/en/notes/changelog.md
@@ -1,5 +1,52 @@
# Changelog (MMPreTrain)
+## v1.0.0rc8(22/05/2023)
+
+### Highlights
+
+- Support multiple multi-modal algorithms and inferencers. You can explore these features with the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
+- Add EVA-02, DINOv2, ViT-SAM and GLIP backbones.
+- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations into MMPretrain.
+
+### New Features
+
+- Support Chinese CLIP. ([#1576](https://github.com/open-mmlab/mmpretrain/pull/1576))
+- Add ScienceQA metrics. ([#1577](https://github.com/open-mmlab/mmpretrain/pull/1577))
+- Support multiple multi-modal algorithms and inferencers. ([#1561](https://github.com/open-mmlab/mmpretrain/pull/1561))
+- Add the EVA-02 backbone. ([#1450](https://github.com/open-mmlab/mmpretrain/pull/1450))
+- Support the DINOv2 backbone. ([#1522](https://github.com/open-mmlab/mmpretrain/pull/1522))
+- Support some downstream classification datasets. ([#1467](https://github.com/open-mmlab/mmpretrain/pull/1467))
+- Support GLIP. ([#1308](https://github.com/open-mmlab/mmpretrain/pull/1308))
+- Register torchvision transforms into MMPretrain (see the sketch after this list). ([#1265](https://github.com/open-mmlab/mmpretrain/pull/1265))
+- Add the ViT backbone of SAM. ([#1476](https://github.com/open-mmlab/mmpretrain/pull/1476))
+
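As referenced in the torchvision item above, here is a hedged sketch of how a registered torchvision transform could be mixed into a training pipeline. The `torchvision/` type prefix and the exact transform arguments are assumptions; consult the torchvision transforms section of the data process documentation for the registered names in your version.

```python
# Hedged sketch of a pipeline mixing MMPretrain and torchvision transforms.
# The 'torchvision/' prefix and arguments are assumptions, not verified API.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    # torchvision's RandomResizedCrop addressed via the registered prefix
    dict(type='torchvision/RandomResizedCrop', size=176),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackInputs'),
]
```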
+### Improvements
+
+- [Refactor] Support freezing channel reduction and add a layer decay function. ([#1490](https://github.com/open-mmlab/mmpretrain/pull/1490))
+- [Refactor] Support resizing `pos_embed` while loading checkpoints and format the output. ([#1488](https://github.com/open-mmlab/mmpretrain/pull/1488))
+
+### Bug Fixes
+
+- Fix ScienceQA. ([#1581](https://github.com/open-mmlab/mmpretrain/pull/1581))
+- Fix the config of BEiT. ([#1528](https://github.com/open-mmlab/mmpretrain/pull/1528))
+- Fix incorrect stage freezing in the RIFormer model. ([#1573](https://github.com/open-mmlab/mmpretrain/pull/1573))
+- Fix DDP bugs caused by `out_type`. ([#1570](https://github.com/open-mmlab/mmpretrain/pull/1570))
+- Fix a potential bug in the multi-task head loss. ([#1530](https://github.com/open-mmlab/mmpretrain/pull/1530))
+- Support BCE loss without batch augmentations. ([#1525](https://github.com/open-mmlab/mmpretrain/pull/1525))
+- Fix a CLIP generator init bug. ([#1518](https://github.com/open-mmlab/mmpretrain/pull/1518))
+- Fix a bug in binary cross-entropy loss. ([#1499](https://github.com/open-mmlab/mmpretrain/pull/1499))
+
+### Docs Update
+
+- Update the PoolFormer citation to the CVPR version. ([#1505](https://github.com/open-mmlab/mmpretrain/pull/1505))
+- Refine the inference doc. ([#1489](https://github.com/open-mmlab/mmpretrain/pull/1489))
+- Add a doc for the usage of the confusion matrix. ([#1513](https://github.com/open-mmlab/mmpretrain/pull/1513))
+- Update the MMagic link. ([#1517](https://github.com/open-mmlab/mmpretrain/pull/1517))
+- Fix the example_project README. ([#1575](https://github.com/open-mmlab/mmpretrain/pull/1575))
+- Add an NPU support page. ([#1481](https://github.com/open-mmlab/mmpretrain/pull/1481))
+- Remove the outdated description in the train config docs. ([#1473](https://github.com/open-mmlab/mmpretrain/pull/1473))
+- Fix a typo in the `MultiLabelDataset` docstring. ([#1483](https://github.com/open-mmlab/mmpretrain/pull/1483))
+
## v1.0.0rc7(07/04/2023)
### Highlights
diff --git a/docs/en/notes/faq.md b/docs/en/notes/faq.md
index 5322e4ee33a..12566016f7d 100644
--- a/docs/en/notes/faq.md
+++ b/docs/en/notes/faq.md
@@ -16,7 +16,8 @@ and make sure you fill in all required information in the template.
| MMPretrain version | MMEngine version | MMCV version |
| :----------------: | :---------------: | :--------------: |
- | 1.0.0rc7 (main) | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
+ | 1.0.0rc8 (main) | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
+ | 1.0.0rc7 | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
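If you are unsure which combination is installed, a quick check from Python (this relies only on the standard `__version__` attributes of the three packages):

```python
# Compare installed versions against the compatibility table above.
import mmcv
import mmengine
import mmpretrain

print('mmpretrain:', mmpretrain.__version__)  # 1.0.0rc8 on main
print('mmengine:  ', mmengine.__version__)    # expected >= 0.7.1
print('mmcv:      ', mmcv.__version__)        # expected >= 2.0.0rc4
```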
```{note}
Since the `dev` branch is under frequent development, the MMEngine and MMCV
diff --git a/docs/zh_CN/get_started.md b/docs/zh_CN/get_started.md
index c2100815aed..0cf252f1f4f 100644
--- a/docs/zh_CN/get_started.md
+++ b/docs/zh_CN/get_started.md
@@ -67,7 +67,7 @@ pip install -U openmim && mim install -e .
直接使用 mim 安装即可。
```shell
-pip install -U openmim && mim install "mmpretrain>=1.0.0rc7"
+pip install -U openmim && mim install "mmpretrain>=1.0.0rc8"
```
```{note}
@@ -83,7 +83,7 @@ MMPretrain 中的多模态模型需要额外的依赖项,要安装这些依赖
mim install -e ".[multimodal]"
# 作为 Python 包安装
-mim install "mmpretrain[multimodal]>=1.0.0rc7"
+mim install "mmpretrain[multimodal]>=1.0.0rc8"
```
## 验证安装
diff --git a/docs/zh_CN/notes/faq.md b/docs/zh_CN/notes/faq.md
index 224228b60bc..744cd3fcbf0 100644
--- a/docs/zh_CN/notes/faq.md
+++ b/docs/zh_CN/notes/faq.md
@@ -13,7 +13,8 @@
| MMPretrain 版本 | MMEngine 版本 | MMCV 版本 |
| :-------------: | :---------------: | :--------------: |
- | 1.0.0rc7 (main) | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
+ | 1.0.0rc8 (main) | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
+ | 1.0.0rc7 | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
```{note}
由于 `dev` 分支处于频繁开发中,MMEngine 和 MMCV 版本依赖可能不准确。如果您在使用
diff --git a/mmpretrain/__init__.py b/mmpretrain/__init__.py
index 7c99b333254..6262d2c67b9 100644
--- a/mmpretrain/__init__.py
+++ b/mmpretrain/__init__.py
@@ -10,7 +10,7 @@
mmcv_maximum_version = '2.1.0'
mmcv_version = digit_version(mmcv.__version__)
-mmengine_minimum_version = '0.5.0'
+mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)
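For context on why this minimum bump matters, here is a hedged, self-contained sketch of how such a version window is typically asserted at import time with `digit_version`; the exact wording of the check in `mmpretrain/__init__.py` may differ.

```python
# Hedged sketch: enforcing the mmengine version window with digit_version.
import mmengine
from mmengine.utils import digit_version

mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)

assert (digit_version(mmengine_minimum_version) <= mmengine_version
        < digit_version(mmengine_maximum_version)), (
    f'MMEngine {mmengine.__version__} is incompatible; please install '
    f'mmengine>={mmengine_minimum_version},<{mmengine_maximum_version}.')
```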
diff --git a/mmpretrain/version.py b/mmpretrain/version.py
index 1816e81d740..1d684c9c1ab 100644
--- a/mmpretrain/version.py
+++ b/mmpretrain/version.py
@@ -1,6 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved
-__version__ = '1.0.0rc7'
+__version__ = '1.0.0rc8'
def parse_version_info(version_str):