diff --git a/README.md b/README.md
index e6a0afbe21d..9d9494345a4 100644
--- a/README.md
+++ b/README.md
@@ -86,6 +86,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351
 
 ## What's new
 
+🌟 v1.0.0rc8 was released on 22/05/2023
+
+- Support multiple **multi-modal** algorithms and inferencers. You can explore these features with the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)! (See the inference sketch after the diff below.)
+- Add EVA-02, DINOv2, ViT-SAM and GLIP backbones.
+- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations into an MMPretrain pipeline. See [the doc](https://mmpretrain.readthedocs.io/en/latest/api/data_process.html#torchvision-transforms). (See the config sketch after the diff below.)
+
 🌟 v1.0.0rc7 was released on 07/04/2023
 
 - Integrated self-supervised learning algorithms from **MMSelfSup**, such as **MAE**, **BEiT**, etc.
@@ -160,6 +166,9 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
       <td>
         Self-supervised Learning
       </td>
+      <td>
+        Multi-Modality Algorithms
+      </td>
       <td>
         Others
       </td>
@@ -239,6 +248,15 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
   • MixMIM (arXiv'2022)
+
+  Image Retrieval Task:
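
To accompany the multi-modal bullet in the first hunk, here is a minimal sketch of running one of the new inferencers. `ImageCaptionInferencer` is part of MMPretrain's inference API; the checkpoint name `blip-base_3rdparty_caption`, the demo image path, and the `pred_caption` result key are assumptions to verify against your installation (e.g. via `mmpretrain.list_models()`).

```python
# A minimal sketch, not the definitive API: the model name, image path,
# and result key below are assumptions to check against your
# MMPretrain version.
from mmpretrain import ImageCaptionInferencer

# Build the inferencer from a model name; weights are fetched on first use.
inferencer = ImageCaptionInferencer(model='blip-base_3rdparty_caption')

# Run captioning on a local image; the inferencer returns one dict per input.
result = inferencer('demo/cat-dog.png')[0]
print(result['pred_caption'])
```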
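And a sketch of the torchvision integration mentioned in the last bullet of that hunk: per the linked doc, torchvision transforms are registered under a `torchvision/` prefix and can be dropped into a data pipeline config. The conversion transforms (`NumpyToPIL`, `PILToNumpy`) and the exact parameter names are assumptions to check against the doc for your version.

```python
# A sketch of a training pipeline mixing MMPretrain and torchvision
# transforms; the transform names outside the 'torchvision/' prefix are
# assumptions to verify against your MMPretrain version.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    # torchvision transforms operate on PIL images, so convert first.
    dict(type='NumpyToPIL', to_rgb=True),
    # Any torchvision transform is reachable via the 'torchvision/' prefix,
    # taking its usual torchvision arguments.
    dict(type='torchvision/RandomResizedCrop', size=224),
    dict(type='torchvision/RandomHorizontalFlip', p=0.5),
    # Convert back so the standard packing step can run.
    dict(type='PILToNumpy', to_bgr=True),
    dict(type='PackInputs'),
]
```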