- Backbones
- "Thing" & "Stuff"
- Accuracy: $\text{Acc}$, $\text{mAcc}^\text{D, I, C}$, $\text{mIoU}^\text{D, I, C}$, $\text{mDice}^\text{D, I, C}$, worst-case metrics
- Calibration Error: $\text{ECE}^\text{D, I}$, $\text{SCE}^\text{D, I}$ (a minimal ECE sketch follows this list)
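As a quick reference for the calibration metrics above, here is a minimal sketch of the standard Expected Calibration Error over equal-width confidence bins. The `ece` helper and its binning scheme are illustrative, not this repo's implementation:

```python
import numpy as np

def ece(conf, correct, n_bins=15):
    """Expected Calibration Error (illustrative sketch).

    conf: per-pixel confidence (max predicted probability), shape [N].
    correct: whether the prediction matched the label, bool, shape [N].
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Weight each bin's |accuracy - confidence| gap by its size.
            total += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return total
```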
- Hardware: a GPU with at least 12GB of memory
- Software: timm; MMSegmentation (only for preparing datasets)
- To use JDTLoss in your codebase (a soft-label sketch follows the snippet):

```python
from losses.jdt_loss import JDTLoss

# Jaccard loss (default): JDTLoss()
# Dice loss: JDTLoss(alpha=0.5, beta=0.5)
criterion = JDTLoss()

for image, label in data_loader:
    logits = model(image)
    loss = criterion(logits, label)
```
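Per the papers cited at the end of this README, JDTLoss is also meant to be optimized with soft labels. A hedged usage sketch, assuming the loss accepts per-pixel class probabilities of shape `[B, C, H, W]` as the target (`teacher_model` is an illustrative name, not part of this repo's API):

```python
import torch

# Assumption: the target may also be a soft label, i.e. per-pixel class
# probabilities of shape [B, C, H, W] (e.g. produced by a teacher model).
with torch.no_grad():
    soft_label = teacher_model(image).softmax(dim=1)
loss = criterion(logits, soft_label)
```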
- To use fine-grained mIoUs in your codebase (an aggregation sketch follows the snippet):

```python
from metrics.metric_group import MetricGroup

metric_group = MetricGroup(num_classes=..., ignore_index=...)

for image, label in data_loader:
    logits = model(image)
    prob = logits.log_softmax(dim=1).exp()
    # Both `prob` and `label` need to be on the CPU.
    metric_group.add(prob, label)

results = metric_group.value()
```
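`MetricGroup` handles the aggregation internally. As a rough illustration of how dataset-wise ($\text{mIoU}^\text{D}$) and image-wise ($\text{mIoU}^\text{I}$) aggregation differ, here is a hypothetical NumPy sketch; it is simplified (hard predictions, no `ignore_index`) and the helpers are not this repo's implementation:

```python
import numpy as np

def iou_counts(pred, label, num_classes):
    # Per-class intersection / union pixel counts for a single image.
    inter = np.array([((pred == c) & (label == c)).sum() for c in range(num_classes)])
    union = np.array([((pred == c) | (label == c)).sum() for c in range(num_classes)])
    return inter, union

def miou_dataset(counts):
    # mIoU^D: accumulate counts over the whole dataset, then average over classes.
    inter = sum(i for i, _ in counts)
    union = sum(u for _, u in counts)
    valid = union > 0
    return (inter[valid] / union[valid]).mean()

def miou_image(counts):
    # mIoU^I: per image, average IoU over classes that appear in its
    # prediction or label; then average over images.
    per_image = [(i[u > 0] / u[u > 0]).mean() for i, u in counts]
    return float(np.mean(per_image))
```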
- Training with hard labels:

```shell
python main.py \
    --data_dir "path/to/data_dir" \
    --output_dir "path/to/output_dir" \
    --model_yaml "deeplabv3plus_resnet101d" \
    --data_yaml "cityscapes" \
    --label_yaml "hard" \
    --loss_yaml "jaccard_ic_present_all" \
    --schedule_yaml "40k_iters" \
    --optim_yaml "adamw_lr6e-5" \
    --test_yaml "test_iou"
```
- Training with soft labels
  - Label Smoothing: You may need to tune $\epsilon$ accordingly (a sketch of the smoothing itself follows the command below).
```shell
python main.py \
    --data_dir "path/to/data_dir" \
    --output_dir "path/to/output_dir" \
    --model_yaml "deeplabv3_resnet50d" \
    --data_yaml "cityscapes" \
    --label_yaml "ls" \
    --loss_yaml "jaccard_d_present_all" \
    --schedule_yaml "40k_iters" \
    --optim_yaml "adamw_lr6e-5" \
    --test_yaml "test_iou"
```
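For intuition, label smoothing with parameter $\epsilon$ turns a hard label into a soft one by moving $\epsilon$ of the probability mass to a uniform distribution. A minimal sketch (the `smooth_labels` helper is illustrative, not this repo's implementation, and it assumes no `ignore_index` pixels):

```python
import torch.nn.functional as F

def smooth_labels(label, num_classes, epsilon=0.1):
    # label: LongTensor of hard labels [B, H, W] -> soft labels [B, C, H, W]:
    # (1 - epsilon) mass on the annotated class, epsilon spread uniformly.
    one_hot = F.one_hot(label, num_classes).permute(0, 3, 1, 2).float()
    return (1 - epsilon) * one_hot + epsilon / num_classes
```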
  - Knowledge Distillation
    - Step 1: Train a teacher with label smoothing. You are encouraged to run the training at least three times and choose the model with the best performance as the teacher.
    - Step 2: Train a student.
```shell
python main.py \
    --teacher_checkpoint "path/to/teacher_checkpoint" \
    --data_dir "path/to/data_dir" \
    --output_dir "path/to/output_dir" \
    --model_yaml "deeplabv3_resnet18d" \
    --teacher_model_yaml "deeplabv3_resnet50d" \
    --data_yaml "cityscapes" \
    --label_yaml "kd" \
    --loss_yaml "jaccard_d_present_all" \
    --schedule_yaml "40k_iters" \
    --optim_yaml "adamw_lr6e-5" \
    --test_yaml "test_iou"
```
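Conceptually, the student is supervised by the teacher's predicted distribution rather than one-hot labels. A hedged sketch under that assumption (`teacher_model`, `student_model`, and `temperature` are illustrative names; the repo's actual KD pipeline is configured via the YAML files above):

```python
import torch

temperature = 1.0  # illustrative; temperature-scales the teacher's distribution

with torch.no_grad():
    teacher_logits = teacher_model(image)
# Use the (optionally temperature-scaled) teacher probabilities
# as the soft target for the student.
soft_label = (teacher_logits / temperature).softmax(dim=1)
loss = criterion(student_model(image), soft_label)
```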
  - Multiple Annotators
```shell
python main.py \
    --data_dir "path/to/data_dir" \
    --output_dir "path/to/output_dir" \
    --model_yaml "unet_resnet50d" \
    --data_yaml "qubiq_brain_growth_fold0_task0" \
    --label_yaml "mr" \
    --loss_yaml "jaccard_d_present_all" \
    --schedule_yaml "150_epochs" \
    --optim_yaml "adamw_lr6e-5" \
    --test_yaml "test_iou"
```
- You may need to tune the relevant hyperparameters accordingly.
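With multiple annotators, a natural soft label is the per-pixel fraction of annotators voting for each class. A hypothetical sketch (it assumes each annotation is a `[B, H, W]` LongTensor with no `ignore_index` pixels; `annotations` and `num_classes` are illustrative names):

```python
import torch
import torch.nn.functional as F

# `annotations` is a list of hard label maps, one per annotator.
one_hots = [
    F.one_hot(a, num_classes).permute(0, 3, 1, 2).float() for a in annotations
]
soft_label = torch.stack(one_hots).mean(dim=0)  # [B, C, H, W]
```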
We express our gratitude to the creators and maintainers of the following projects: pytorch-image-models, MMSegmentation, segmentation_models.pytorch, and structure_knowledge_distillation.
```bibtex
@InProceedings{Wang2023Revisiting,
  title     = {Revisiting Evaluation Metrics for Semantic Segmentation: Optimization and Evaluation of Fine-grained Intersection over Union},
  author    = {Wang, Zifu and Berman, Maxim and Rannen-Triki, Amal and Torr, Philip H.S. and Tuia, Devis and Tuytelaars, Tinne and Van Gool, Luc and Yu, Jiaqian and Blaschko, Matthew B.},
  booktitle = {NeurIPS},
  year      = {2023}
}

@InProceedings{Wang2023Jaccard,
  title     = {Jaccard Metric Losses: Optimizing the Jaccard Index with Soft Labels},
  author    = {Wang, Zifu and Ning, Xuefei and Blaschko, Matthew B.},
  booktitle = {NeurIPS},
  year      = {2023}
}

@InProceedings{Wang2023Dice,
  title     = {Dice Semimetric Losses: Optimizing the Dice Score with Soft Labels},
  author    = {Wang, Zifu and Popordanoska, Teodora and Bertels, Jeroen and Lemmens, Robin and Blaschko, Matthew B.},
  booktitle = {MICCAI},
  year      = {2023}
}
```