From 4dd8a861456a88966f1672060c1f7b24f05ec363 Mon Sep 17 00:00:00 2001
From: Ma Zerun
Date: Tue, 23 May 2023 11:22:51 +0800
Subject: [PATCH] Bump version to v1.0.0rc8 (#1583)

* Bump version to v1.0.0rc8

* Apply suggestions from code review

Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com>

* Update README.md

---------

Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com>
---
 README.md                  | 18 +++++++++++++++
 README_zh-CN.md            | 18 +++++++++++++++
 docker/serve/Dockerfile    |  2 +-
 docs/en/get_started.md     |  4 ++--
 docs/en/notes/changelog.md | 47 ++++++++++++++++++++++++++++++++++++++
 docs/en/notes/faq.md       |  3 ++-
 docs/zh_CN/get_started.md  |  4 ++--
 docs/zh_CN/notes/faq.md    |  3 ++-
 mmpretrain/__init__.py     |  2 +-
 mmpretrain/version.py      |  2 +-
 10 files changed, 94 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index e6a0afbe21d..9d9494345a4 100644
--- a/README.md
+++ b/README.md
@@ -86,6 +86,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351

 ## What's new

+🌟 v1.0.0rc8 was released on 22/05/2023
+
+- Support multiple **multi-modal** algorithms and inferencers. You can explore these features via the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
+- Add EVA-02, Dino-V2, ViT-SAM and GLIP backbones.
+- Register torchvision transforms into MMPretrain, so you can now easily integrate torchvision's data augmentations. See [the doc](https://mmpretrain.readthedocs.io/en/latest/api/data_process.html#torchvision-transforms).
+
 🌟 v1.0.0rc7 was released on 07/04/2023

 - Integrate self-supervised learning algorithms from **MMSelfSup**, such as **MAE**, **BEiT**, etc.
@@ -160,6 +166,9 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
         Self-supervised Learning
+
+        Multi-Modality Algorithms
+
         Others
@@ -239,6 +248,15 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
         MixMIM (arXiv'2022)
+
+        Image Retrieval Task:
+
+        图像检索任务:
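The torchvision-transform registration noted in this release means a data pipeline config can reference torchvision classes directly by a prefixed type name. A minimal sketch, assuming the `torchvision/` prefix convention described in the linked data-process doc; the surrounding transform names and parameters here are illustrative, not taken from this patch:

```python
# Hypothetical MMPretrain-style training pipeline mixing framework transforms
# with torchvision ones via the registered 'torchvision/' type-name prefix.
# Parameter names mirror torchvision's RandomResizedCrop / RandomHorizontalFlip.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='torchvision/RandomResizedCrop', size=176),
    dict(type='torchvision/RandomHorizontalFlip', p=0.5),
    dict(type='PackInputs'),
]

# Torchvision-backed steps are identified by the prefix on their type name.
torchvision_steps = [t for t in train_pipeline
                     if t['type'].startswith('torchvision/')]
```

In this scheme the registry dispatches on the `type` string, so torchvision augmentations slot into the same pipeline list as MMPretrain's own transforms without wrapper classes.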