The official implementation of "Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification", accepted by AAAI-2022. Note that this repo is built upon MMT.
- We made the repo publicly available.
- We cleaned up our code and submitted the first commit to GitHub.
git clone [email protected]:JeyesHan/P2LR.git
cd P2LR
pip install -r requirements.txt
cd examples && mkdir data
Download the raw datasets DukeMTMC-reID, Market-1501, and MSMT17, then unzip them under the directory like
P2LR/examples/data
├── dukemtmc
│ └── DukeMTMC-reID
├── market1501
│ └── Market-1501-v15.09.15
└── msmt17
└── MSMT17_V1
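The layout above can be prepared in advance; a minimal sketch is below. The archive filenames are assumptions and may differ from what you actually downloaded:

```shell
# create the expected directory skeleton under examples/data
mkdir -p examples/data/dukemtmc examples/data/market1501 examples/data/msmt17

# unzip each raw dataset into its folder (archive names are assumptions):
# unzip DukeMTMC-reID.zip -d examples/data/dukemtmc
# unzip Market-1501-v15.09.15.zip -d examples/data/market1501
# tar -xf MSMT17_V1.tar.gz -C examples/data/msmt17
```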
Change Line 24 of P2LR/datasets/custom.py to the path of your_custom_dataset. If you have multiple custom datasets, you can copy and adapt P2LR/datasets/custom.py for each of them.
P2LR/examples/data
├── dukemtmc
│ └── DukeMTMC-reID
├── market1501
│ └── Market-1501-v15.09.15
└── custom
└── your_custom_dataset
├── trainval
├── probe
└── gallery
Transferring from DukeMTMC-reID to Market-1501 with a ResNet-50 backbone, i.e. Duke-to-Market (ResNet-50).
We utilize 4 TITAN XP GPUs for training.
An explanation about the number of GPUs and the size of mini-batches:
- We adopted 4 GPUs with a total batch size of 64, since we found that 16 images of 4 identities per GPU benefits the learning of the BN layers, achieving optimal performance.
- It is fine to try other hyper-parameters, i.e. numbers of GPUs and batch sizes. We recommend keeping a mini-batch of 16 images per GPU for the BN layers, e.g. use a batch size of 32 for 2-GPU training.
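The rule of thumb above (keep 16 images per GPU, grouped as 4 identities of 4 instances each) can be sanity-checked with a tiny helper. This function is illustrative only, not part of the repo:

```python
def per_gpu_batch(total_batch, num_gpus, num_instances=4):
    """Split a total batch across GPUs and report identities per GPU.

    Assumes a PK-style sampler where each identity contributes
    `num_instances` images, so the per-GPU batch must hold whole identities.
    """
    assert total_batch % num_gpus == 0, "batch must divide evenly across GPUs"
    images = total_batch // num_gpus
    assert images % num_instances == 0, "per-GPU batch must hold whole identities"
    return images, images // num_instances

# the paper's setting: 4 GPUs, batch 64 -> 16 images of 4 identities per GPU
print(per_gpu_batch(64, 4))  # (16, 4)
# a 2-GPU alternative that keeps 16 images per GPU: batch 32
print(per_gpu_batch(32, 2))  # (16, 4)
```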
# pre-training on the source domain
sh scripts/pretrain.sh dukemtmc market1501 resnet50 1
sh scripts/pretrain.sh dukemtmc market1501 resnet50 2
We utilized the K-Means clustering algorithm in the paper.
# end-to-end training with P2LR
sh scripts/train_P2LR_kmeans.sh dukemtmc market1501 resnet50 500 0.3
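For reference, clustering-based pseudo-labelling with K-Means (as used above, with 500 clusters for Duke-to-Market) boils down to assigning cluster indices as pseudo identity labels. This sketch uses scikit-learn on random stand-in features and is not the repo's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

# toy stand-in for target-domain features; the real pipeline would use
# 2048-d ResNet-50 features extracted from unlabeled target images
rng = np.random.RandomState(0)
features = rng.randn(200, 32).astype(np.float32)

# cluster the target features; cluster indices become pseudo identity labels
# (the script above passes 500 clusters; we use 10 for this toy example)
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)
pseudo_labels = kmeans.labels_
print(pseudo_labels.shape)  # (200,)
```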
We utilize 1 GPU for testing. Test the best-performing trained model by
sh scripts/test.sh market1501 resnet50 logs/dukemtmcTOmarket1501/resnet-P2LR-500/model_best.pth.tar
Market-to-Duke (ResNet-50)
# pre-training on the source domain
sh scripts/pretrain.sh market1501 dukemtmc resnet50 1
sh scripts/pretrain.sh market1501 dukemtmc resnet50 2
# end-to-end training with P2LR
sh scripts/train_P2LR_kmeans.sh market1501 dukemtmc resnet50 700 0.2
# testing the best model
sh scripts/test.sh dukemtmc resnet50 logs/market1501TOdukemtmc/resnet-P2LR-700/model_best.pth.tar
Market-to-MSMT (ResNet-50)
# pre-training on the source domain
sh scripts/pretrain.sh market1501 msmt17 resnet50 1
sh scripts/pretrain.sh market1501 msmt17 resnet50 2
# end-to-end training with P2LR
sh scripts/train_P2LR_kmeans.sh market1501 msmt17 resnet50 1500 0.3
# testing the best model
sh scripts/test.sh msmt17 resnet50 logs/market1501TOmsmt17/resnet-P2LR-1500/model_best.pth.tar
Duke-to-MSMT (ResNet-50)
# pre-training on the source domain
sh scripts/pretrain.sh dukemtmc msmt17 resnet50 1
sh scripts/pretrain.sh dukemtmc msmt17 resnet50 2
# end-to-end training with P2LR
sh scripts/train_P2LR_kmeans.sh dukemtmc msmt17 resnet50 1500 0.3
# testing the best model
sh scripts/test.sh msmt17 resnet50 logs/dukemtmcTOmsmt17/resnet-P2LR-1500/model_best.pth.tar
The reported results of this repo on four mainstream UDA Re-ID benchmarks are listed below.
The default number of epochs used in our paper is 100 for all four tasks. However, we currently find that epoch=60 for Duke-to-Market achieves similar performance, so setting a lower epoch count saves training time. Testing with different epoch settings is a TODO item, and we will update the results in the README.
If you find this code useful for your research, please cite our paper:
@misc{han2021delving,
  title={Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification},
  author={Jian Han and Yali Li and Shengjin Wang},
  year={2021},
  eprint={2112.14025},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}