Problems in inputting 3 and 5 channel images #3
Comments
Hi, I'm happy to see that you're trying out our model on cell images! I think this bug may be due to the code being designed for one-channel images. I'd like to make the code work for images beyond 1- and 3-channel, but one issue is that the images are loaded with PIL, so could you let me know what filetype your images are? I want to make sure that PIL loads them correctly (or whether I should use something besides PIL). Thanks!
Thank you for the quick response. I tested with 1-channel images and it works well; thank you again for your framework. I am currently working with numpy images.
Hi, I modified the code to try to load your images from np.arrays, rather than through PIL, based on the chosen channel count. It should convert the arrays directly into torch tensors, with the same resizing and normalizing operations as usual. Also, it assumes your np arrays are saved as (N_channels, H, W), and still assumes that your segmentations (if you're using a segmentation-guided model) are saved as image files. Could you test the most recent commit and see if it works with your setup?
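The loading path described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: the function name `to_model_tensor` and the min-max normalization to [-1, 1] are assumptions, but the (C, H, W) layout, the float conversion, and the resize-to-`image_size` step follow the thread.

```python
import numpy as np
import torch
import torch.nn.functional as F

def to_model_tensor(arr, image_size=256):
    """Convert a (C, H, W) numpy array into a resized, normalized torch tensor,
    skipping PIL entirely (hypothetical sketch of the approach in the thread)."""
    t = torch.from_numpy(arr).float()       # cast: interpolate needs a float tensor
    t = F.interpolate(t.unsqueeze(0),       # add batch dim -> (1, C, H, W)
                      size=(image_size, image_size))
    t = t.squeeze(0)                        # back to (C, H, W)
    # assumed normalization to [-1, 1]; the real pipeline may differ
    return (t - t.min()) / (t.max() - t.min() + 1e-8) * 2 - 1

# usage (path is illustrative): img = to_model_tensor(np.load("cell_image.npy"))
```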
Thank you again for the effort of modifying the code. I have tried it with my numpy data, and overall it works fairly well after I modified some parts:

1. `RuntimeError: "compute_indices_weights_nearest" not implemented for 'Int'`. My solution was to add `.float()` to the tensor before interpolation: `(preprocess(F.interpolate(torch.tensor(np.load(image)).unsqueeze(0).float(), size=(config.image_size, config.image_size))) for image in examples["image"])`.
2. An extra dimension left over from the `unsqueeze` (I did not save an error code for this). My solution was to squeeze it back right after the preprocess code.

I am currently not working with the segmentation-guided model, but soon will be, so I will update if there is a problem with it. Thank you for the code; it is really easy to read since there are no messy lines. If my solution seems bad, or you have a better idea for solving the issue, I would really appreciate hearing it as well.
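The two fixes from this comment can be demonstrated on a standalone tensor (the array and sizes here are illustrative, not the repo's actual pipeline):

```python
import numpy as np
import torch
import torch.nn.functional as F

arr = np.random.randint(0, 256, size=(5, 128, 128))   # integer-typed numpy image

# Fix 1: cast to float before F.interpolate; nearest-neighbor resizing on an
# integer tensor raises: "compute_indices_weights_nearest" not implemented for 'Int'
t = torch.tensor(arr).unsqueeze(0).float()            # (1, 5, 128, 128)
t = F.interpolate(t, size=(256, 256))                 # nearest-neighbor by default

# Fix 2: squeeze the temporary batch dim back out after preprocessing, so the
# dataset yields (C, H, W) and the DataLoader adds the real batch dimension.
t = t.squeeze(0)                                      # (5, 256, 256)
```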
Thanks for debugging this! Your solution is good; I hadn't considered those errors. I'll go ahead and add your fixes as commit 9f532bb (or let me know if you want to add them as a PR instead). Closing for now since your specific issue seems resolved, but please open a new issue if anything else comes up :)
Hi, my apologies for the late reply! The setting at `segmentation-guided-diffusion/training.py`, line 50 (commit 6ae1b3e),
is experimental (not fully tested), and was only for testing class-conditional classifier-free guided image generation. In your case of standard segmentation-guided generation, you want to leave it off, i.e. set `class_conditional: bool = False`; in other words, use the default settings. To fix your bug, did you use
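For reference, the default the maintainer describes would look something like the sketch below. Only `class_conditional: bool = False` comes from the thread; the class name `TrainingConfig` and the other fields are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrainingConfig:
    # illustrative fields; the repo's actual config may differ
    image_size: int = 256

    # Experimental flag for class-conditional classifier-free guidance.
    # Leave False (the default) for standard segmentation-guided generation.
    class_conditional: bool = False
```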
Firstly, thank you for your work on making a clean model for medical image generation
I have a problem trying to feed my 5-channel cell images into the model.
The error is:
RuntimeError: Given groups=1, weight of size [128, 5, 3, 3], expected input[1, 3, 256, 256] to have 5 channels, but got 3 channels instead
and it is raised at:
noise_pred = model(sample=noisy_images, timestep=timesteps, return_dict=False)[0]
in the training.py file.
I have checked that the noisy images' shape is torch.Size([5, 256, 256]) right before they enter the model, and the timesteps' shape is torch.Size([5]).
The same problem arises when I use 3-channel images, but then the error is:
RuntimeError: Given groups=1, weight of size [128, 3, 3, 3], expected input[1, 2, 256, 256] to have 3 channels, but got 2 channels instead
Whenever the input goes into the model, the channel count changes for some reason.
For reference, the model configuration for the 5-channel case is:
Thank you so much if you can help out with that.
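The mismatch reported above can be reproduced with a bare convolution layer: the model's first conv expects `in_channels` to match the configured channel count, so anything earlier in the pipeline that changes the channel dimension (for example, a PIL RGB conversion) triggers exactly this error. A minimal sketch, with shapes taken from this issue:

```python
import torch
import torch.nn as nn

# First conv of a hypothetical 5-channel model: weight shape [128, 5, 3, 3]
conv = nn.Conv2d(in_channels=5, out_channels=128, kernel_size=3, padding=1)

x_ok = torch.randn(1, 5, 256, 256)    # channels match the weight -> works
y = conv(x_ok)

x_bad = torch.randn(1, 3, 256, 256)   # 3 channels, as in the reported error
try:
    conv(x_bad)
except RuntimeError as e:
    # raises a channel-mismatch RuntimeError like the one in this issue
    print(e)
```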