
Commit

update
KainingYing committed Jun 25, 2024
1 parent 285292e commit e700458
Showing 153 changed files with 14,676 additions and 410 deletions.
15 changes: 13 additions & 2 deletions Quickstart.md
@@ -2,9 +2,20 @@

## Prepare the Dataset

You can download the MMT-Bench dataset at the following link: [HuggingFace](https://huggingface.co/datasets/Kaining/MMT-Bench/blob/main/MMT-Bench_VAL.tsv). **Note**: We only provide the `VAL` split for now; the `TEST` split will be supported in the future.
VLMEvalKit now supports MMT-Bench, and its built-in functions will automatically download the dataset the first time you use them.
You can also download the MMT-Bench dataset from the following links: [HuggingFace](https://huggingface.co/datasets/OpenGVLab/MMT-Bench), [ModelScope](https://modelscope.cn/datasets/OpenGVLab/MMT-Bench).

Put the data under `LMUData/`.
We provide four dataset files:

- MMT-Bench_VAL: Used for local model evaluation (10% of the samples), where multiple images in multi-image samples are combined into a single image.
- MMT-Bench_VAL_MI: Used for local model evaluation (10% of the samples), but multi-image samples are stored as separate images.
- MMT-Bench_ALL: The FULL set (100% of the samples) evaluated on [this server](https://eval.ai/web/challenges/challenge-page/2328/overview), where multiple images in multi-image samples are combined into a single image.
- MMT-Bench_ALL_MI: Also the FULL set (100% of the samples) evaluated on [this server](https://eval.ai/web/challenges/challenge-page/2328/overview), but multi-image samples are stored as separate images.

**Note**: "MI" indicates that multi-image tasks are preserved in their original format; without "MI", the multiple images are combined into a single image for evaluation. Single-image tasks are evaluated identically in both cases. We recommend that LVLMs capable of handling multi-image inputs use the MI files (MMT-Bench_VAL_MI, MMT-Bench_ALL_MI) for evaluation, while those that do not support multi-image inputs use the combined versions (MMT-Bench_VAL, MMT-Bench_ALL).


Put the data under `~/LMUData/`.
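
As an example, here is a minimal sketch that fetches the `VAL` split with `huggingface_hub` and places it under `~/LMUData/`; the filename `MMT-Bench_VAL.tsv` is an assumption based on the links above, so adjust it if the repository layout differs:

```python
# Sketch: download the MMT-Bench VAL split into ~/LMUData/.
# Assumes the dataset repo exposes MMT-Bench_VAL.tsv at its root.
import os
from huggingface_hub import hf_hub_download

target_dir = os.path.expanduser("~/LMUData")
os.makedirs(target_dir, exist_ok=True)

path = hf_hub_download(
    repo_id="OpenGVLab/MMT-Bench",
    repo_type="dataset",
    filename="MMT-Bench_VAL.tsv",
    local_dir=target_dir,
)
print(f"Saved to {path}")
```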

## Step 0. Installation & Setting Up Essential Keys

33 changes: 14 additions & 19 deletions README.md
@@ -1,22 +1,33 @@
# Best Practice

OpenCompass [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) now supports MMT-Bench! **We strongly recommend using [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for its useful features and ready-to-use LVLM implementations.**
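
As a hedged sketch of what an evaluation run might look like (the flags follow VLMEvalKit's documented `run.py` CLI; `qwen_chat` is a placeholder for whichever supported LVLM you want to test, and the dataset names match the files listed in the Quickstart):

```python
# Sketch: drive VLMEvalKit's run.py from Python, launched from the
# VLMEvalKit repository root. Equivalent to running the command in a shell.
import subprocess

subprocess.run(
    [
        "python", "run.py",
        "--data", "MMT-Bench_VAL",   # or MMT-Bench_VAL_MI / MMT-Bench_ALL / MMT-Bench_ALL_MI
        "--model", "qwen_chat",      # placeholder: any model key VLMEvalKit supports
        "--verbose",
    ],
    check=True,
)
```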

# MMT-Bench

<p align="left">
<a href="#🚀-quick-start"><b>Quick Start</b></a> |
<a href="https://mmt-bench.github.io/"><b>HomePage</b></a> |
<a href="https://arxiv.org/abs/2404.16006"><b>arXiv</b></a> |
<a href="https://huggingface.co/datasets/Kaining/MMT-Bench"><b>Dataset</b></a> |
<a href="https://huggingface.co/datasets/OpenGVLab/MMT-Bench"><b>Dataset</b></a> |
<a href="#🖊️-citation"><b>Citation</b></a> <br>

</p>


This repository is the official implementation of [MMT-Bench](https://arxiv.org/abs/2404.16006).

> [MMT-Bench: A Multimodal MultiTask Benchmark for Comprehensive Evaluation of Large Vision-Language Models](https://arxiv.org/abs/2404.16006)
> Kaining Ying<sup>\*</sup>, Fanqing Meng<sup>\*</sup>, Jin Wang<sup>\*</sup>, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, jiayi lei, Quanfeng Lu, Peng Gao, Runjian Chen, Peng Xu, Renrui Zhang, Haozhe Zhang, Yali Wang, Yu Qiao, Ping Luo, Kaipeng Zhang<sup>\#</sup>, Wenqi Shao<sup>\#</sup>
> <sup>\*</sup> KY, FM, and JW contributed equally.
> <sup>\#</sup> WS ([email protected]) and KZ ([email protected]) are the corresponding authors.
## 💡 News

- `2024/04/24`: The technical report of [MMT-Bench](https://arxiv.org/abs/2404.16006) is released! Check out our [project page](https://mmt-bench.github.io/)!
- `2024/04/26`: We release the evaluation code and the `VAL` split.
- `2024/05/01`: MMT-Bench is accepted by ICML 2024. See you in Vienna! 🇦🇹🇦🇹🇦🇹
- `2024/06/17`: OpenCompass [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) now supports MMT-Bench! **We strongly recommend using [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for its useful features and ready-to-use LVLM implementations.**
- `2024/06/25`: The evaluation of the `ALL` split is hosted on [EvalAI](https://eval.ai/web/challenges/challenge-page/2328/overview).
- `2024/06/25`: We release the `ALL` and `VAL` splits.

## Introduction
MMT-Bench is a comprehensive benchmark designed to assess LVLMs across a massive range of multimodal tasks requiring expert knowledge and deliberate visual recognition, localization, reasoning, and planning. MMT-Bench comprises 31,325 meticulously curated multiple-choice visual questions from various multimodal scenarios, such as vehicle driving and embodied navigation, covering 32 core meta-tasks and 162 subtasks in multimodal understanding.
![overview](assets/overview.jpg)
@@ -71,22 +82,6 @@ MMT-Bench is a comprehensive benchmark designed to assess LVLMs across massive m
| 34 | Frequency Guess | 31.7 |
| 35 | Random Guess | 28.5 |

### VAL Split

Coming soon.

### TEST Split

Coming soon.



## 💡 News

- `2024/04/24`: The technical report of [MMT-Bench](https://arxiv.org/abs/2404.16006) is released! Check out our [project page](https://mmt-bench.github.io/)!
- `2024/04/26`: We release the evaluation code and the `VAL` split.



## 🚀 Quick Start

23 changes: 23 additions & 0 deletions VLMEvalKit-main/.github/workflows/lint.yml
@@ -0,0 +1,23 @@
name: lint

on: [push, pull_request]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.7
        uses: actions/setup-python@v2
        with:
          python-version: 3.7
      - name: Install pre-commit hook
        run: |
          pip install pre-commit
          pre-commit install
      - name: Linting
        run: pre-commit run --all-files
158 changes: 158 additions & 0 deletions VLMEvalKit-main/.gitignore
@@ -0,0 +1,158 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# Images
images/

scripts/*ttf
31 changes: 31 additions & 0 deletions VLMEvalKit-main/.pre-commit-config.yaml
@@ -0,0 +1,31 @@
exclude: |
  (?x)^(
    scripts/|
    assets/|
    vlmeval/config.py
  )
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 5.0.4
    hooks:
      - id: flake8
        args: ["--max-line-length=120", "--ignore=F401,F403,F405,E402,E722,E741,W503"]
        exclude: ^configs/
  - repo: https://github.com/pre-commit/mirrors-yapf
    rev: v0.30.0
    hooks:
      - id: yapf
        args: ["--style={column_limit=120}"]
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.1.0
    hooks:
      - id: trailing-whitespace
      - id: check-yaml
      - id: end-of-file-fixer
      - id: requirements-txt-fixer
      - id: double-quote-string-fixer
      - id: check-merge-conflict
      - id: fix-encoding-pragma
        args: ["--remove"]
      - id: mixed-line-ending
        args: ["--fix=lf"]