# SHREC 2022 Track: Sketch-Based 3D Shape Retrieval in the Wild
Organizers: Jie Qin, Shuaihang Yuan, Jiaxin Chen, Boulbaba Ben Amor, Yi Fang.
# News
- 09/03/2022: We have announced the final results.
- 28/02/2022: We have released the test sets at [Google Drive].
- 16/02/2022: We have released the evaluation code at [Google Drive] as well as the test protocol (see below).
- 05/02/2022: We have released the training set for the second task at [Google Drive].
- 29/01/2022: We have released the training set for the first task at [Google Drive].
- 15/01/2022: We have released a few sample sketches (100 per category) and 3D models (10 per category) at [Google Drive] [Baidu Netdisk].
Visit our Google Drive folder for all the data:
# Device
Tesla V100 GPU, CUDA 10.2
# Key Libs
Python 3.7.11, PyTorch 1.7.1, PyTorch3d 0.4.0
# Set up Conda virtual environment
`conda env create -f environment.yml`
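For reference, a minimal `environment.yml` consistent with the versions listed above might look like the sketch below; the environment name `sbsrw` is hypothetical, and the released file is authoritative and likely lists additional dependencies:

```yaml
# Hypothetical sketch of environment.yml, based on the versions listed above.
name: sbsrw
channels:
  - pytorch
  - pytorch3d
  - conda-forge
dependencies:
  - python=3.7.11
  - pytorch=1.7.1
  - cudatoolkit=10.2
  - pytorch3d=0.4.0
```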
Download all the files from our Google Drive folder and put them into `SBSRW/dataset`.
# Train
To train the multi-view (MV) baseline for both tasks (note that the pretrained backbone is not used in this baseline):
- `cd mv`
- `./shrec22_script/train_mv_cad.sh`
- `./shrec22_script/train_mv_wild.sh`
To train the point-cloud baseline for both tasks, download the pretrained backbone weights here and put them into `point/checkpoint`, then run:
- `cd point`
- `./shrec22_script/train_pc_cad.sh`
- `./shrec22_script/train_pc_wild.sh`
# Test
Download the well-trained weights of the two methods for the two tasks here.

To test the MV baseline:
- `cd mv`
- `./shrec22_script/test_mv_cad.sh`
- `./shrec22_script/test_mv_wild.sh`

To test the point-cloud baseline:
- `cd point`
- `./shrec22_script/test_pc_cad.sh`
- `./shrec22_script/test_pc_wild.sh`
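Each test script produces a query-by-gallery distance matrix saved as a `.npy` file, which is then used for evaluation below. A quick sanity check, assuming queries along the rows and a hypothetical file name `task1.npy`:

```python
import numpy as np

# Hypothetical file name; use the .npy file produced by your test script.
dist = np.load("task1.npy")

# Rows index query sketches, columns index gallery 3D models;
# smaller entries mean more similar pairs.
print(dist.shape)

# Rank the gallery for the first query (ascending distance) and
# show the indices of its 10 nearest models.
print(np.argsort(dist[0])[:10])
```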
# Evaluation
Run the online Colab-evaluation to evaluate existing distance matrices.
- If you have no new test results (i.e., distance matrices), directly run the code in Colab-plot_PR_results.
- If you have newly generated results (i.e., distance matrices), follow the steps below to perform the evaluation:
1. Follow the test part above to produce your distance matrices.
2. Upload your distance matrices (`.npy` files) to our Google Drive folder as `team_5_TMP/submission/Task 1/task1.npy` and `team_5_TMP/submission/Task 2/task2.npy` (or similar formats), and add these two paths to the `distM_filenames` variable.
3. Run Colab-plot_PR_results to view and save the figure of plots.
4. Set `only_best = False` to generate Fig. 12 (`task = 1` for Fig. 12 (a), `task = 2` for Fig. 12 (b)); set `only_best = True` to generate Fig. 13 (`task = 1` for Fig. 13 (a), `task = 2` for Fig. 13 (b)).
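For example, to reproduce Fig. 12 (a) from the two uploaded matrices, the variables in Colab-plot_PR_results would be set roughly as follows (a sketch; only the variables named above are shown):

```python
# Paths of the uploaded distance matrices (adjust to your own folders).
distM_filenames = [
    "team_5_TMP/submission/Task 1/task1.npy",
    "team_5_TMP/submission/Task 2/task2.npy",
]

only_best = False  # False -> Fig. 12; True -> Fig. 13
task = 1           # 1 -> subfigure (a); 2 -> subfigure (b)
```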
# Performance Metrics
For a comprehensive evaluation of different algorithms, we employ the widely adopted SBSR performance metrics: nearest neighbor (NN), first tier (FT), second tier (ST), E-measure (E), discounted cumulated gain (DCG), mean average precision (mAP), and the precision-recall (PR) curve. We provide the source code to compute all the aforementioned metrics (see the evaluation code released above); a sketch of the computation is shown below.
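For illustration, the following minimal sketch computes NN, FT, ST, E, DCG, and mAP from a query-by-gallery distance matrix and class labels. The function name `evaluate` is hypothetical, and it follows common SHREC conventions (E-measure over the top 32 retrievals, DCG normalized by its ideal value); the released evaluation code is the authoritative reference and may differ in details:

```python
import numpy as np

def evaluate(dist, query_labels, gallery_labels):
    """Compute NN, FT, ST, E, DCG, and mAP (in %) from a distance matrix.

    dist: (n_queries, n_gallery) array; smaller entries mean more similar.
    """
    gallery_labels = np.asarray(gallery_labels)
    n_q, n_g = dist.shape
    nn = ft = st = e = dcg = ap = 0.0
    for i in range(n_q):
        order = np.argsort(dist[i])                     # ascending distance
        rel = gallery_labels[order] == query_labels[i]  # relevance by rank
        c = int(rel.sum())                              # number of relevant models
        if c == 0:
            continue
        nn += rel[0]                                    # nearest neighbor
        ft += rel[:c].sum() / c                         # first tier
        st += rel[:2 * c].sum() / c                     # second tier
        k = min(32, n_g)                                # E-measure over top 32
        p, r = rel[:k].sum() / k, rel[:k].sum() / c
        e += 2 * p * r / (p + r) if p + r > 0 else 0.0
        # DCG with discounts 1, 1, log2(3), log2(4), ..., normalized by its ideal.
        disc = np.log2(np.maximum(np.arange(1, n_g + 1), 2))
        dcg += (rel / disc).sum() / (1.0 / disc[:c]).sum()
        # Average precision: precision at each relevant rank, averaged.
        ranks = np.flatnonzero(rel) + 1                 # 1-based hit ranks
        ap += (np.arange(1, c + 1) / ranks).mean()
    scores = {"NN": nn, "FT": ft, "ST": st, "E": e, "DCG": dcg, "mAP": ap}
    return {name: 100.0 * v / n_q for name, v in scores.items()}
```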
# Results
**Task #1 (%)**

| Rank | Team | NN | FT | ST | E | DCG | mAP |
|---|---|---|---|---|---|---|---|
| 1 | HCMUS_2 | 92.23 | 86.96 | 92.77 | 49.04 | 95.4 | 90.18 |
| 2 | CCZU | 2.35 | 1.94 | 3.92 | 0.36 | 38.16 | 2.23 |
| 3 | HIT | 1.08 | 1.54 | 3.1 | 0.11 | 36.29 | 2.05 |
**Task #2 (%)**

| Rank | Team | NN | FT | ST | E | DCG | mAP |
|---|---|---|---|---|---|---|---|
| 1 | HCMUS_2 | 71.16 | 61.29 | 71.81 | 25.18 | 86.18 | 67.31 |
| 2 | HCMUS_1 | 39.73 | 44.71 | 63.1 | 14.47 | 77.17 | 46.67 |
| 3 | HIT | 10.93 | 11.13 | 20.58 | 3.86 | 60.18 | 15.15 |
| 4 | CCZU | 10.23 | 9.85 | 19.52 | 3.08 | 58.75 | 10.09 |
For more details, please contact Prof. Jie Qin.