Python 3.6 PyTorch 1.1

Probabilistic uncertainty guided Progressive Label Refinery (P2LR)

The official implementation of Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification, accepted by AAAI-2022. Note that this repo is built upon MMT.

[framework figure]

What's New

[Mar. 28th, 2022]

  • We made the repo publicly available.

[Dec. 21st, 2021]

  • We cleaned up our code and submitted the first commit to GitHub.

Installation

git clone git@github.com:JeyesHan/P2LR.git
cd P2LR
pip install -r requirements.txt

Prepare Datasets

cd examples && mkdir data

Download the raw datasets DukeMTMC-reID, Market-1501, and MSMT17, then unzip them under the directory structure shown below:

P2LR/examples/data
├── dukemtmc
│   └── DukeMTMC-reID
├── market1501
│   └── Market-1501-v15.09.15
└── msmt17
    └── MSMT17_V1
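
For example, starting from the P2LR repo root and assuming the archives have already been downloaded (the archive file names below are placeholders; adjust them, and the unzip/untar command, to match your downloads):

cd examples/data
mkdir -p dukemtmc market1501 msmt17
# each archive is expected to expand into the sub-folder shown in the tree above;
# if an archive has no top-level folder, create it first and unzip into it
unzip /path/to/DukeMTMC-reID.zip -d dukemtmc/
unzip /path/to/Market-1501-v15.09.15.zip -d market1501/
unzip /path/to/MSMT17_V1.zip -d msmt17/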

Custom Datasets

Change Line 24 of P2LR/datasets/custom.py to the path of your_custom_dataset (a setup sketch is given after the tree below). If you have multiple custom datasets, you can copy and adapt P2LR/datasets/custom.py according to your data.

P2LR/examples/data
├── dukemtmc
│   └── DukeMTMC-reID
├── market1501
│   └── Market-1501-v15.09.15
└── custom
    └── your_custom_dataset
        ├── trainval
        ├── probe
        └── gallery
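
A minimal sketch of the corresponding setup (the directory names follow the tree above; the exact content of Line 24 in P2LR/datasets/custom.py is not reproduced here, so open that file and point its dataset path at the directory you create):

# from the P2LR repo root: create the expected layout for a custom dataset
mkdir -p examples/data/custom/your_custom_dataset/trainval
mkdir -p examples/data/custom/your_custom_dataset/probe
mkdir -p examples/data/custom/your_custom_dataset/gallery
# then edit Line 24 of P2LR/datasets/custom.py so that it points to
# examples/data/custom/your_custom_dataset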
    

Example #1:

Transferring from DukeMTMC-reID to Market-1501 with a ResNet-50 backbone, i.e., Duke-to-Market (ResNet-50).

Train

We utilize 4 TITAN XP GPUs for training.

An explanation about the number of GPUs and the size of mini-batches:

  • We adopted 4 GPUs with a total batch size of 64, since we found that 16 images from 4 identities per GPU benefit the learning of the BN layers, achieving optimal performance.
  • It is fine to try other hyper-parameters, i.e., the number of GPUs and the batch size. We recommend keeping a per-GPU mini-batch of 16 images for the BN layers, e.g., use a batch size of 32 for 2-GPU training (see the sketch below).
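
A minimal sketch of that rule of thumb (the 16-images-per-GPU figure comes from the bullets above; the variable names are only for illustration):

# keep the per-GPU mini-batch at 16 images (4 identities x 4 images each)
NUM_GPUS=2
IMAGES_PER_GPU=16
BATCH_SIZE=$((IMAGES_PER_GPU * NUM_GPUS))   # -> 32 for 2 GPUs, 64 for 4 GPUs
echo "Use a total batch size of ${BATCH_SIZE} when training on ${NUM_GPUS} GPUs"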

Stage I: Pre-training on the source domain

sh scripts/pretrain.sh dukemtmc market1501 resnet50 1
sh scripts/pretrain.sh dukemtmc market1501 resnet50 2
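
A note on the trailing argument (an assumption based on the MMT codebase this repo builds on; check scripts/pretrain.sh to confirm): it looks like a seed, and the script is run twice so that two differently initialized pre-trained models are available.

# assumed usage, to be confirmed against scripts/pretrain.sh:
#   sh scripts/pretrain.sh <source> <target> <arch> <seed>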

Stage II: End-to-end training with P2LR

We utilized the K-Means clustering algorithm in the paper.

sh scripts/train_P2LR_kmeans.sh dukemtmc market1501 resnet50 500 0.3
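
Argument sketch (an assumption; scripts/train_P2LR_kmeans.sh is authoritative): the fourth argument appears to be the number of K-Means clusters (500, 700, and 1500 in the examples of this README), and the fifth appears to be a P2LR-specific hyper-parameter.

# assumed usage, to be confirmed against scripts/train_P2LR_kmeans.sh:
#   sh scripts/train_P2LR_kmeans.sh <source> <target> <arch> <num_clusters> <hyper_param>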

Test

We utilize 1 GPU for testing. Test the trained model with the best performance by:

sh scripts/test.sh market1501 resnet50 logs/dukemtmcTOmarket1501/resnet-P2LR-500/model_best.pth.tar
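
The general form of the test command, inferred from the examples in this README (placeholders are illustrative):

# sh scripts/test.sh <target> <arch> <path/to/model_best.pth.tar>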

Other Examples:

Market-to-Duke (ResNet-50)

# pre-training on the source domain
sh scripts/pretrain.sh market1501 dukemtmc resnet50 1
sh scripts/pretrain.sh market1501 dukemtmc resnet50 2
# end-to-end training with P2LR
sh scripts/train_P2LR_kmeans.sh market1501 dukemtmc resnet50 700 0.2
# testing the best model
sh scripts/test.sh dukemtmc resnet50 logs/market1501TOdukemtmc/resnet-P2LR-700/model_best.pth.tar

Market-to-MSMT (ResNet-50)

# pre-training on the source domain
sh scripts/pretrain.sh market1501 msmt17 resnet50 1
sh scripts/pretrain.sh market1501 msmt17 resnet50 2
# end-to-end training with P2LR
sh scripts/train_P2LR_kmeans.sh market1501 msmt17 resnet50 1500 0.3
# testing the best model
sh scripts/test.sh msmt17 resnet50 logs/market1501TOmsmt17/resnet-P2LR-1500/model_best.pth.tar

Duke-to-MSMT (ResNet-50)

# pre-training on the source domain
sh scripts/pretrain.sh dukemtmc msmt17 resnet50 1
sh scripts/pretrain.sh dukemtmc msmt17 resnet50 2
# end-to-end training with P2LR
sh scripts/train_P2LR_kmeans.sh dukemtmc msmt17 resnet50 1500 0.3
# testing the best model
sh scripts/test.sh msmt17 resnet50 logs/dukemtmcTOmsmt17/resnet-P2LR-1500/model_best.pth.tar

Reported Results

The reported results of this repo on four mainstream UDA Re-ID benchmarks are listed below.

[results figure]

Hint

The default number of epochs used in our paper is 100 for all four tasks. However, we currently find that 60 epochs achieve similar performance for D2M, so training time can be saved by setting a lower epoch count. Testing different epoch settings is a TODO item; we will update the results in the README.

Citation

If you find this code useful for your research, please cite our paper:

@misc{han2021delving,
      title={Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification}, 
      author={Jian Han and Yali Li and Shengjin Wang},
      year={2021},
      eprint={2112.14025},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
