I fine-tuned the SegNeXt model on a custom dataset (including 6 types of backgrounds) and found that the label values were normal in the dataloader (ranging from 0 to num_classes - 1), but after being passed into the class_weight calculation the label suddenly contained some 255 values. After debugging, I found that these values were introduced by seg_pad_val.
The label values are error-free in the dataloader, but by the time they reach the loss some '255' values have been added.
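A quick way to confirm this (just a sketch; it assumes the label is an integer torch.Tensor, and the gt_sem_seg access path in the comment is only an example of where the mask usually lives in mmseg 1.x data samples):

import torch

def dump_label_values(tag, label):
    # label is expected to be an integer tensor of class indices
    print(tag, torch.unique(label).tolist())

# e.g. right after the dataloader and again inside cross_entropy():
#   dump_label_values("dataloader:", data_sample.gt_sem_seg.data)
#   dump_label_values("in loss:   ", label)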
In the class_weight calculation (/conda/lib/python3.9/site-packages/mmsegmentation-1.2.2-py3.9.egg/mmseg/models/losses/cross_entropy_loss.py) the relevant code is:
if (avg_factor is None) and reduction == 'mean':
    if class_weight is None:
        for cls in label:
            print("cls: ", cls)
        if avg_non_ignore:
            avg_factor = label.numel() - (label == ignore_index).sum().item()
        else:
            avg_factor = label.numel()
    else:
        # the average factor should take the class weights into account
        # print("label: ", label)
        # print("label_shape: ", label.shape)
        # print("class_weight: ", class_weight)
        # print("cls in label:")
        # for cls in label:
        #     print("cls: ", cls)
        #     print("class_weight[cls]: ", class_weight[cls])
        label_weights = torch.stack([class_weight[cls] for cls in label
                                     ]).to(device=class_weight.device)
        if avg_non_ignore:
            label_weights[label == ignore_index] = 0
        avg_factor = label_weights.sum()
The failing line is label_weights = torch.stack([class_weight[cls] for cls in label]).to(device=class_weight.device): here cls can be '255', so class_weight[cls] is indexed out of bounds.
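A minimal standalone reproduction (just a sketch; the class count and label values below are made up, not taken from the actual run) shows the same failure:

import torch

class_weight = torch.tensor([1.0, 2.0, 1.0, 1.0, 1.0])  # one weight per class (5 classes here)
label = torch.tensor([0, 3, 255, 1])                     # 255 injected by seg_pad_val

# Mirrors the line in cross_entropy_loss.py: class_weight only has 5 entries,
# so indexing it with 255 fails (IndexError on CPU; on GPU it typically
# surfaces as a device-side assert instead).
try:
    label_weights = torch.stack([class_weight[cls] for cls in label])
except (IndexError, RuntimeError) as err:
    print("out-of-bounds index:", err)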
When I changed the value of seg_pad_val from '255' to '0', the label values went back to normal.
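For reference, the padded label value normally comes from seg_pad_val in the model's data_preprocessor; the snippet below is only an illustrative mmseg 1.x config fragment (the numbers are placeholders, not the actual settings used here):

data_preprocessor = dict(
    type='SegDataPreProcessor',
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True,
    pad_val=0,         # padding value for the image
    seg_pad_val=255,   # padding value for the label; padded pixels are meant to be ignored
    size=(512, 512))

Setting seg_pad_val to 0 avoids the crash, but it also makes the padded pixels indistinguishable from class 0, so it works around the conflict rather than resolving it.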
This is mmseg/datasets/fire_dataset.py:

# Copyright (c) OpenMMLab. All rights reserved.
from mmseg.registry import DATASETS
from .basesegdataset import BaseSegDataset


@DATASETS.register_module()
class FireDataset(BaseSegDataset):
    """ADE20K dataset.

    In segmentation map annotation for ADE20K, 0 stands for background, which
    is not included in 150 categories. ``reduce_zero_label`` is fixed to True.
    The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to
    '.png'.
    """
    METAINFO = dict(
        classes=('fire', 'smoke_black', 'smoke_white', 'smoke_yellow',
                 'spark'),
        palette=[[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50],
                 [4, 200, 3]])

    def __init__(self,
                 img_suffix='.jpg',
                 seg_map_suffix='.png',
                 reduce_zero_label=True,
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=reduce_zero_label,
            **kwargs)
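One detail worth noting for this dataset: with reduce_zero_label=True, mmseg also remaps the raw background value 0 to 255 when loading the annotation, so 255 already acts as the ignore value for this dataset. A small sketch of that remapping (the example mask values are made up):

import numpy as np

# Sketch of what reduce_zero_label=True does to a raw annotation in mmseg:
# pixel value 0 becomes 255 (ignored), every other class id shifts down by 1.
mask = np.array([[0, 1, 2],
                 [3, 4, 5]], dtype=np.uint8)
reduced = mask.copy()
reduced[reduced == 0] = 255   # background -> ignore
reduced = reduced - 1         # 255 wraps to 254 on uint8
reduced[reduced == 254] = 255
print(reduced)                # [[255 0 1] [2 3 4]]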
The relevant files are the SegNeXt model config file, _base_/datasets/fire6dataset.py, and mmseg/datasets/fire_dataset.py (shown above).
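For completeness, a rough sketch of what a dataset config such as _base_/datasets/fire6dataset.py could look like when it uses FireDataset (all paths, sizes, and pipeline steps below are hypothetical, not the actual file):

# Hypothetical sketch of _base_/datasets/fire6dataset.py, not the real config.
dataset_type = 'FireDataset'
data_root = 'data/fire6'

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', reduce_zero_label=True),
    dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackSegInputs')
]

train_dataloader = dict(
    batch_size=4,
    num_workers=4,
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(img_path='images/train',
                         seg_map_path='annotations/train'),
        pipeline=train_pipeline))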