PyTorch implementation of an adapted pix2pixHD method for high-resolution (e.g. 1080x1080) virtual staining via image-to-image translation.
- Linux or macOS
- Python 3
- NVIDIA GPU (11 GB memory or larger) + CUDA and cuDNN
Clone this repo:

git clone git@github.com:krulllab/can_virtual_staining_for_high_thorughout_screening_generalize.git
cd can_virtual_staining_for_high_thorughout_screening_generalize

Create and activate a new environment, then install the package in editable mode:

conda create -n can_virtual_staining_for_high_thorughout_screening_generalize python=3.8
conda activate can_virtual_staining_for_high_thorughout_screening_generalize
pip install -e .
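To confirm that the environment sees a suitable GPU (the prerequisites above call for an 11 GB+ NVIDIA card), here is a minimal sketch, assuming PyTorch was installed by the step above:

```python
# Sanity check: PyTorch must see a CUDA GPU with enough memory (>= 11 GB).
import torch

assert torch.cuda.is_available(), "No CUDA-capable GPU visible to PyTorch"
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB")
```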
Train a model:

python ./src/train.py --dataroot ../path_to_data/ --data_type 16 --batchSize 4 --checkpoints_dir ../results/ --label_nc 0 --name experiment1 --no_instance --resize_or_crop none --input_nc 1 --output_nc 1 --seed 42 --no_vgg_loss --nThreads 1 --loadSize 256 --ndf 32 --norm instance --use_dropout --gpu_ids 1
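The command assumes paired images under `--dataroot`. Upstream pix2pixHD with `--label_nc 0` reads inputs from `train_A` and targets from `train_B` (and `test_A`/`test_B` at test time); the adapted code may use different folder names, so treat the sketch below as a hypothetical layout check:

```python
# Hypothetical check of a pix2pixHD-style aligned dataset layout
# (folder names assumed, not confirmed for this repo).
from pathlib import Path

dataroot = Path("../path_to_data")  # same value as --dataroot above
for split in ("train_A", "train_B", "test_A", "test_B"):
    folder = dataroot / split
    if folder.is_dir():
        print(split, "-", len(list(folder.iterdir())), "files")
    else:
        print(split, "- missing")
```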
- To view training results, launch TensorBoard on the directory passed as `--checkpoints_dir`, e.g.

tensorboard --logdir ../results/
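For reference, scalars reach TensorBoard through event files; a minimal sketch of that mechanism with `torch.utils.tensorboard.SummaryWriter` (an illustration only, not this repo's actual logging code):

```python
# Illustration: writing scalar losses that `tensorboard --logdir ...` can plot.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="../results/experiment1")  # assumed log location
for step in range(100):
    dummy_loss = 1.0 / (step + 1)  # placeholder value
    writer.add_scalar("loss/G_GAN", dummy_loss, global_step=step)
writer.close()
```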
To train with multiple GPUs, pass several ids to `--gpu_ids`:

python ./src/train.py --dataroot ../path_to_data/ --data_type 16 --batchSize 4 --checkpoints_dir ../results/ --label_nc 0 --name experiment1 --no_instance --resize_or_crop none --input_nc 1 --output_nc 1 --seed 42 --no_vgg_loss --nThreads 1 --loadSize 256 --ndf 32 --norm instance --use_dropout --gpu_ids 1,2,3
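Passing several ids to `--gpu_ids` splits each batch across those GPUs; upstream pix2pixHD does this with `torch.nn.DataParallel`. A minimal sketch of the pattern (not this repo's exact code):

```python
# Illustration of DataParallel batch splitting across the GPUs in --gpu_ids.
import torch
import torch.nn as nn

gpu_ids = [1, 2, 3]                       # as in the example command above
net = nn.Conv2d(1, 32, 3, padding=1)      # stand-in for the generator
net = nn.DataParallel(net, device_ids=gpu_ids).cuda(gpu_ids[0])

x = torch.randn(4, 1, 256, 256).cuda(gpu_ids[0])  # batchSize 4, 1 input channel
y = net(x)                                # sub-batches run on GPUs 1, 2 and 3
print(y.shape)
```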
- To train with mixed precision support, please first install Apex from https://github.com/NVIDIA/apex. You can then train the model by adding `--fp16`. For example,
python ./src/train.py --dataroot ../path_to_data/ --data_type 16 --batchSize 4 --checkpoints_dir ../results/ --name experiment1 --fp16
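For context, the `--fp16` path in upstream pix2pixHD wraps the networks and optimizers with Apex's `amp` API; a minimal sketch of that pattern (assuming the adapted code follows upstream, which has not been verified here):

```python
# Illustration of the apex.amp pattern behind --fp16 (requires Apex installed).
import torch
import torch.nn as nn
from apex import amp

net = nn.Conv2d(1, 1, 3, padding=1).cuda()            # stand-in network
optimizer = torch.optim.Adam(net.parameters(), lr=2e-4)
net, optimizer = amp.initialize(net, optimizer, opt_level="O1")

x = torch.randn(4, 1, 256, 256).cuda()
loss = net(x).mean()                                   # placeholder loss
with amp.scale_loss(loss, optimizer) as scaled_loss:   # dynamic loss scaling
    scaled_loss.backward()
optimizer.step()
```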
To test a trained model, run:

python ./src/test.py --results_dir ../results/inference/ --dataroot ../path_to_data/ --data_type 16 --batchSize 1 --checkpoints_dir ../results/experiment1/
The test results will be saved to an HTML file under the directory given by `--results_dir` (here `../results/inference/`).
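Assuming the synthesized images are also written as individual image files under `--results_dir` (the exact sub-folder layout depends on the code), they can be gathered for downstream analysis, e.g.:

```python
# Hypothetical post-processing: collect generated images under --results_dir.
from pathlib import Path

results_dir = Path("../results/inference")
images = sorted(p for p in results_dir.rglob("*")
                if p.suffix.lower() in {".png", ".jpg", ".tif", ".tiff"})
print(f"found {len(images)} generated images")
```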
This code borrows heavily from pytorch-CycleGAN-and-pix2pix and pix2pixHD.