Describe the bug

QuantizedFluxTransformer2DModel save bug: calling save_pretrained on a qfloat8-quantized FluxTransformer2DModel raises a ValueError about a non-contiguous tensor.

Reproduction
import torch
from diffusers import FluxTransformer2DModel
from optimum.quanto import QuantizedDiffusersModel, qfloat8

class QuantizedFluxTransformer2DModel(QuantizedDiffusersModel):
    base_class = FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", subfolder="transformer", torch_dtype=torch.bfloat16,
).to("cuda")
qtransformer = QuantizedFluxTransformer2DModel.quantize(transformer, weights=qfloat8)
# Useless: the error complains about `weight._data`, which this loop never touches.
# for param in qtransformer.parameters():
#     param.data = param.data.contiguous()
qtransformer.save_pretrained("fluxfill_transformer_fp8")  # raises ValueError
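A possible workaround, sketched here as an untested assumption based on the error message (the packed weights appear to live in a `_data` attribute on each quantized parameter): make those sub-tensors contiguous before saving. The helper name `contiguify_quantized` is hypothetical, not an optimum-quanto API.

```python
import torch

def contiguify_quantized(module: torch.nn.Module) -> None:
    """Hypothetical workaround sketch: pack the `_data` tensor of each
    quantized parameter, since that is the tensor the save error names."""
    for param in module.parameters():
        data = getattr(param, "_data", None)
        if isinstance(data, torch.Tensor) and not data.is_contiguous():
            # .contiguous() returns a packed copy with the same values.
            param._data = data.contiguous()
```

If this approach is viable, `contiguify_quantized(qtransformer)` before `save_pretrained` should avoid the error; whether optimum-quanto's quantized tensors actually permit reassigning `_data` this way is unverified.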
Logs
ValueError: You are trying to save a non contiguous tensor: `time_text_embed.timestep_embedder.linear_1.weight._data` which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call `.contiguous()` on your tensor to pack it before saving.
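For context, this error most likely comes from safetensors-based serialization, which refuses tensors whose memory layout is not packed. A minimal illustration of what "non contiguous" means in PyTorch:

```python
import torch

# A transposed view shares storage with the original tensor, so its memory
# layout is strided rather than packed: it is non-contiguous.
t = torch.randn(4, 8).t()
assert not t.is_contiguous()

# .contiguous() copies the values into a freshly packed buffer, which is
# the layout that serialization accepts.
packed = t.contiguous()
assert packed.is_contiguous()
assert torch.equal(packed, t)
```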
System Info
python==3.12
torch==2.4.0+cu121
transformers==4.47.0
optimum-quanto==0.2.6
diffusers==main branch (as of 12.23)
Who can help?
No response