Error occurred when executing T5TextEncode #ELLA (RX580, i3-9100F, Windows 11, 32 GB RAM) #42
Error occurred when executing T5TextEncode #ELLA:
"addmm_impl_cpu_" not implemented for 'Half'
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\ella.py", line 228, in encode
cond = text_encoder_model(text, max_length=max_length)
File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\model.py", line 158, in call
outputs = self.model(text_input_ids, attention_mask=attention_mask) # type: ignore
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1980, in forward
encoder_outputs = self.encoder(
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1115, in forward
layer_outputs = layer_module(
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 695, in forward
self_attention_outputs = self.layer[0](
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 602, in forward
attention_output = self.SelfAttention(
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 521, in forward
query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
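For anyone hitting the same thing: the failure is in the last frame, `F.linear` on a half-precision (fp16) tensor while the model is executing on the CPU. PyTorch's CPU backend does not implement `addmm` for `torch.float16`, so the T5 text encoder crashes whenever it falls back to CPU, which is expected on an RX 580 since that GPU has no usable ROCm/CUDA backend on Windows. Below is a minimal sketch of the usual workaround, loading (or casting) the encoder in float32 for CPU inference. The checkpoint name is a placeholder for illustration, not necessarily the one the ELLA node loads:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Placeholder checkpoint; the ComfyUI-ELLA loader may point at a different one.
MODEL_ID = "google/flan-t5-xl"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Loading in fp16 and running on CPU reproduces the error:
#   model = T5EncoderModel.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
#   model(**tokenizer("a cat", return_tensors="pt"))
#   -> RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

# Workaround: keep the encoder in float32 when no GPU is available.
model = T5EncoderModel.from_pretrained(MODEL_ID, torch_dtype=torch.float32)
model.eval()

with torch.no_grad():
    inputs = tokenizer("a photo of a cat", return_tensors="pt")
    cond = model(**inputs).last_hidden_state  # runs on CPU in fp32
print(cond.shape)
```

The same idea applied to the traceback above would mean converting the model behind `text_encoder_model` with `.float()` before the call at `ella.py` line 228; where exactly the dtype is set depends on how the ComfyUI-ELLA loader is configured, so treat this as a direction to investigate rather than a confirmed fix.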