
Error occurred when executing T5TextEncode #ELLA: (RX580, i3-9100F, Windows 11, 32 GB RAM) #42

Open · KillyTheNetTerminal opened this issue May 7, 2024 · 6 comments


@KillyTheNetTerminal

Error occurred when executing T5TextEncode #ELLA:

"addmm_impl_cpu_" not implemented for 'Half'

File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\ella.py", line 228, in encode
cond = text_encoder_model(text, max_length=max_length)
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\model.py", line 158, in call
outputs = self.model(text_input_ids, attention_mask=attention_mask) # type: ignore
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1980, in forward
encoder_outputs = self.encoder(
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1115, in forward
layer_outputs = layer_module(
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 695, in forward
self_attention_outputs = self.layer[0](
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 602, in forward
attention_output = self.SelfAttention(
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 521, in forward
query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
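
For context, the failing call is PyTorch's fp16 linear layer running on the CPU: this torch build has no Half-precision addmm kernel for CPU tensors. A minimal standalone sketch (a toy layer with hypothetical tensors, not ELLA code) that reproduces the same RuntimeError and shows the fp32 workaround, assuming a PyTorch build without CPU fp16 matmul support as in the traceback above:

```python
# Repro sketch (toy layer, not ELLA code). Assumes a PyTorch build without
# CPU fp16 matmul support, as in the reported traceback.
import torch

layer = torch.nn.Linear(4, 4).half()        # fp16 weights on the CPU
x = torch.randn(1, 4, dtype=torch.float16)

try:
    layer(x)                                 # F.linear -> addmm on CPU
except RuntimeError as err:
    print(err)                               # "addmm_impl_cpu_" not implemented for 'Half'

# Workaround: run the text encoder in fp32 on the CPU instead.
out = layer.float()(x.float())
print(out.dtype)                             # torch.float32
```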

@JettHu (Collaborator) commented May 8, 2024

Use the --fp32-text-enc launch flag; refer to
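
(Note: that is a ComfyUI command-line flag, so it is passed when starting the server; assuming the standard main.py entry point, the launch line becomes python main.py --fp32-text-enc. It keeps the text-encoder weights in fp32, which sidesteps the missing fp16 CPU kernel.)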

@KillyTheNetTerminal (Author)

oh my god thanks it worked!

@KillyTheNetTerminal (Author)

[image: imagen_2024-05-08_120449476]
Exactly the same workflow with the same model, but this is the output. Am I missing something?
[image: imagen_2024-05-08_120523659]

@JettHu (Collaborator) commented May 9, 2024

It looks like using --fp32-text-enc affects the results; refer to

The results on my machine are similar to yours.

[image]

@JettHu (Collaborator) commented May 9, 2024

The effect is somewhat different on some GPU models that cannot run fp16. This may be something we need to pay attention to in the future.
cc @budui
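
A rough way to see why the dtype shifts the output, as a sketch with hypothetical random tensors (not the actual T5 weights): fp16 carries only about three decimal digits of precision, so every matmul drifts slightly from its fp32 result, and the drift compounds through the encoder's stacked blocks, shifting the conditioning and therefore the image.

```python
# Sketch with hypothetical random tensors (not real T5 weights): fp16
# rounding makes a single matmul drift from the fp32 reference.
import torch

torch.manual_seed(0)
w = torch.randn(768, 768)
x = torch.randn(1, 768)

ref  = x @ w                                 # fp32 reference
half = (x.half() @ w.half()).float()         # same matmul in fp16

print((ref - half).abs().max())              # small per-layer drift that
                                             # compounds across stacked blocks
```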

@KillyTheNetTerminal (Author)

Is there a way to solve this? Can't the RX580 use fp16?
