🚀 Feature
Transforms that shift voxel intensity, such as intensity flipping (i.e. `1 - val` for `val` in `[0, 1]`), cluster-and-remap, contrast jitter, etc.
Motivation
I am working on a spine segmentation problem on MRI images where I need to train a model to perform across multiple pulse-sequence modalities, but the training data contains only a single modality. As a result, models tend to latch onto intensity features and perform very poorly on modalities with different intensity distributions (see the middle image vs. the others).
Pitch
Adding transforms to shift intensity features would allow models to pick up on shapes and contours rather than learning intensity values as features.
Alternatives
One very basic approach could be something like this:
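The original snippet is not shown here, but a minimal sketch of the intensity flip described above (rescale to `[0, 1]`, then apply `1 - val` with some probability) might look like the following. The function name and signature are illustrative, not part of any existing library:

```python
import numpy as np

def random_intensity_flip(volume, p=0.5, rng=None):
    """With probability p, invert the intensities of a volume.

    The volume is first min-max rescaled to [0, 1] so that the
    flip 1 - v is well defined regardless of the input range.
    (Hypothetical helper; names and defaults are illustrative.)
    """
    rng = rng or np.random.default_rng()
    v = volume.astype(np.float64)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)  # rescale to [0, 1]
    if rng.random() < p:
        v = 1.0 - v  # invert bright/dark structures
    return v
```

Applied with `p=0.5` inside a training pipeline, roughly half the samples would have their bright and dark structures swapped, discouraging the model from memorizing absolute intensity values.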
Transforming intensity in a physically plausible manner is not easy, and there are very few effective transforms (the only one in torchio was RandomGamma, and in my experience it was not effective enough). The transform you propose sounds good because it makes a radical change by inverting the intensity, so it is worth adding.
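For reference, the effect of a gamma transform like torchio's RandomGamma can be sketched in plain NumPy: sample a gamma factor log-uniformly and raise the normalized intensities to that power. This is a simplified illustration of the idea, not torchio's actual implementation:

```python
import numpy as np

def random_gamma(volume, log_gamma_range=(-0.3, 0.3), rng=None):
    """Randomly adjust contrast via v ** gamma.

    gamma = exp(u) with u ~ Uniform(log_gamma_range), so gamma is
    sampled symmetrically around 1 on a log scale. Assumes the
    input is already normalized to [0, 1]. (Illustrative sketch of
    the gamma-augmentation idea, not torchio's exact code.)
    """
    rng = rng or np.random.default_rng()
    gamma = np.exp(rng.uniform(*log_gamma_range))
    return np.clip(volume, 0.0, 1.0) ** gamma
```

With gamma near 1 this is close to the identity, which is one reason such perturbations may be too mild to bridge a real change of pulse sequence, whereas an inversion changes the contrast qualitatively.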
I would be very interested to know whether this is enough to generalize properly.
The best approach to get a contrast-agnostic model is Billot's SynthSeg, since it trains the model with truly random contrasts, and the results are impressive (it segments any contrast!).
However, it requires very good, realistic labels (including structures in the background), so it may not be easily applicable to your use case.