The main dependencies are torch 2.0.1, mmcv and mmdet.
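A minimal environment-setup sketch, assuming pip and the OpenMMLab `mim` installer (the repo does not pin mmcv/mmdet versions, so the commands below pull the latest compatible releases):

```bash
pip install torch==2.0.1 torchvision   # pick the wheel matching your CUDA driver
pip install -U openmim                 # OpenMMLab package manager
mim install mmcv                       # CUDA ops required by mmdet
mim install mmdet
```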
- Run `extract_data.py` for data pre-processing (a batch-run sketch follows the directory tree below). The expected `datasets` layout:
```
datasets
├── cpm17
│   ├── extract_data.py
│   ├── test
│   └── train
├── cpm17_test_files.npy
├── cpm17_train_files.npy
├── kumar
│   ├── extract_data.py
│   ├── images
│   └── labels
├── kumar_test_files.npy
├── kumar_train_files.npy
├── pannuke
│   ├── extract_data.py
│   ├── Fold 1
│   ├── Fold 2
│   ├── Fold 3
│   ├── Images
│   └── Masks
├── pannuke123_test_files.npy
├── pannuke123_train_files.npy
├── pannuke123_val_files.npy
├── pannuke213_test_files.npy
├── pannuke213_train_files.npy
├── pannuke213_val_files.npy
├── pannuke321_test_files.npy
├── pannuke321_train_files.npy
└── pannuke321_val_files.npy
```
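A batch-run sketch for the pre-processing step, assuming each per-dataset `extract_data.py` needs no command-line arguments (check the scripts for dataset-specific options):

```bash
# Run every per-dataset pre-processing script (paths as in the tree above).
for d in cpm17 kumar pannuke; do
    (cd "datasets/$d" && python extract_data.py)
done
```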
- Train the prompter:
```bash
cd prompter
python main.py --config dpa_pannuke123.py --output_dir dpa_pannuke123 --model-ema
# python main.py --config dpa_pannuke213.py --output_dir dpa_pannuke213 --model-ema
# python main.py --config dpa_pannuke321.py --output_dir dpa_pannuke321 --model-ema
```
- Use the trained prompter to generate nuclei prompts for the validation and test sets:
```bash
python predict_prompts.py --config dpa_pannuke123.py --resume checkpoint/dpa_pannuke123/best.pth
# python predict_prompts.py --config dpa_pannuke213.py --resume checkpoint/dpa_pannuke213/best.pth
# python predict_prompts.py --config dpa_pannuke321.py --resume checkpoint/dpa_pannuke321/best.pth
```
- Download SAM's pre-trained weights into `segmentor/pretrained` and train the segmentor. The official checkpoint URLs are sketched below, followed by the training commands.
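A download sketch using the official checkpoint URLs from the facebookresearch/segment-anything release (whether the configs expect these exact file names is an assumption; adjust if needed):

```bash
# Fetch the official SAM checkpoints into segmentor/pretrained.
mkdir -p segmentor/pretrained && cd segmentor/pretrained
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth   # ViT-B
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth   # ViT-L
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth   # ViT-H
cd ../..
```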
```bash
cd segmentor
torchrun --nproc_per_node=4 main.py --config pannuke123_b.py --output_dir pannuke123_b
# torchrun --nproc_per_node=4 main.py --config pannuke213_b.py --output_dir pannuke213_b
# torchrun --nproc_per_node=4 main.py --config pannuke321_b.py --output_dir pannuke321_b
```
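The checkpoint table below also lists ViT-L and ViT-H segmentors. Assuming their configs follow the same naming scheme as `pannuke123_b.py` (the `_l`/`_h` file names here are hypothetical), the corresponding runs would look like:

```bash
# Hypothetical config names, following the *_b.py pattern above.
torchrun --nproc_per_node=4 main.py --config pannuke123_l.py --output_dir pannuke123_l
torchrun --nproc_per_node=4 main.py --config pannuke123_h.py --output_dir pannuke123_h
```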
For testing, see `test.sh`. Trained checkpoints are available via the OneDrive links below.
| Model | Kumar | CPM-17 | PanNuke123 | PanNuke213 | PanNuke321 |
|---|---|---|---|---|---|
| Prompter | OneDrive | OneDrive | OneDrive | OneDrive | OneDrive |
| Segmentor-B | OneDrive | OneDrive | OneDrive | OneDrive | OneDrive |
| Segmentor-L | OneDrive | OneDrive | OneDrive | OneDrive | OneDrive |
| Segmentor-H | OneDrive | OneDrive | OneDrive | OneDrive | OneDrive |
If you have any questions or concerns, feel free to open an issue or contact us directly (Zhongyi Shui, [email protected]).
If you find this code useful for your research, please cite us using the following BibTeX entries.
@inproceedings{shui2024dpa,
title={DPA-P2PNet: Deformable Proposal-Aware P2PNet for Accurate Point-Based Cell Detection},
author={Shui, Zhongyi and Zheng, Sunyi and Zhu, Chenglu and Zhang, Shichuan and Yu, Xiaoxuan and Li, Honglin and Li, Jingxiong and Chen, Pingyi and Yang, Lin},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={38},
number={5},
pages={4864--4872},
year={2024}
}
@inproceedings{shui2025unleashing,
title={Unleashing the power of prompt-driven nucleus instance segmentation},
author={Shui, Zhongyi and Zhang, Yunlong and Yao, Kai and Zhu, Chenglu and Zheng, Sunyi and Li, Jingxiong and Li, Honglin and Sun, Yuxuan and Guo, Ruizhe and Yang, Lin},
booktitle={European Conference on Computer Vision},
pages={288--304},
year={2025},
organization={Springer}
}