
After installing SageAttention, IPAdapter reports an error; I can confirm SageAttention installed successfully and runs fine in Trellis #794

Open
laitianwen opened this issue Feb 27, 2025 · 1 comment


@laitianwen

No description provided.

@laitianwen
Author

!!! Exception during processing !!! headdim should be in [64, 96, 128].
Traceback (most recent call last):
File "/root/ComfyUI/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/root/ComfyUI/execution.py", line 202, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/root/ComfyUI/execution.py", line 174, in map_node_over_list
process_inputs(input_dict, i)
File "/root/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "/root/ComfyUI/custom_nodes/comfyui_ipadapter_plus/IPAdapterPlus.py", line 848, in apply_ipadapter
work_model, face_image = ipadapter_execute(work_model, ipadapter_model, clip_vision, **ipa_args)
File "/root/ComfyUI/custom_nodes/comfyui_ipadapter_plus/IPAdapterPlus.py", line 376, in ipadapter_execute
img_cond_embeds = encode_image_masked(clipvision, image, batch_size=encode_batch_size, tiles=enhance_tiles, ratio=enhance_ratio, clipvision_size=clipvision_size)
File "/root/ComfyUI/custom_nodes/comfyui_ipadapter_plus/utils.py", line 242, in encode_image_masked
embeds = encode_image_masked(clip_vision, image, mask, batch_size, clipvision_size=clipvision_size)
File "/root/ComfyUI/custom_nodes/comfyui_ipadapter_plus/utils.py", line 299, in encode_image_masked
out = clip_vision.model(pixel_values=pixel_values, intermediate_output=-2)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/root/ComfyUI/comfy/clip_model.py", line 217, in forward
x = self.vision_model(*args, **kwargs)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/root/ComfyUI/comfy/clip_model.py", line 199, in forward
x, i = self.encoder(x, mask=None, intermediate_output=intermediate_output)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/root/ComfyUI/comfy/clip_model.py", line 70, in forward
x = l(x, mask, optimized_attention)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/root/ComfyUI/comfy/clip_model.py", line 51, in forward
x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/root/ComfyUI/comfy/clip_model.py", line 21, in forward
out = optimized_attention(q, k, v, self.heads, mask)
File "/root/ComfyUI/comfy/ldm/modules/attention.py", line 441, in attention_pytorch
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
File "/root/ComfyUI/lib/python3.10/site-packages/sageattention/core.py", line 82, in sageattn
assert headdim in [64, 96, 128], "headdim should be in [64, 96, 128]."
AssertionError: headdim should be in [64, 96, 128].

Prompt executed in 9.57 seconds
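For anyone hitting the same assertion: the trace shows torch.nn.functional.scaled_dot_product_attention ending up in sageattention/core.py, which suggests SDPA has been globally replaced with sageattn (a common way to enable SageAttention for Trellis). That routes every attention call through SageAttention, including the CLIP vision encoder used by IPAdapter, whose head dimension (likely 80 for ViT-H/14: width 1280 over 16 heads) falls outside SageAttention's supported [64, 96, 128]. Below is a minimal workaround sketch, assuming the global monkey-patch is the trigger; the patch site and the exact sageattn signature may differ between SageAttention versions, so treat it as a starting point rather than a fix from the maintainers.

```python
# Sketch: only route supported shapes to SageAttention; everything else
# falls back to PyTorch's native SDPA. Assumes the crash comes from a
# global monkey-patch of F.scaled_dot_product_attention (see note above).
import torch.nn.functional as F
from sageattention import sageattn

_native_sdpa = F.scaled_dot_product_attention  # keep a reference to the original

def guarded_sdpa(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False, **kwargs):
    # sageattn asserts headdim in [64, 96, 128] and takes no attention mask,
    # so e.g. CLIP ViT-H's head dim of 80 must go to native SDPA instead.
    # (SageAttention also expects fp16/bf16 inputs; add a dtype check if needed.)
    if attn_mask is None and dropout_p == 0.0 and q.size(-1) in (64, 96, 128):
        return sageattn(q, k, v, is_causal=is_causal)
    return _native_sdpa(q, k, v, attn_mask=attn_mask, dropout_p=dropout_p,
                        is_causal=is_causal, **kwargs)

F.scaled_dot_product_attention = guarded_sdpa
```

Patched this way, SageAttention stays active for the layers it supports (the diffusion model's 64/96/128 head dims) while the CLIP vision encoder silently falls back to PyTorch attention, which should let the IPAdapter nodes run again.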
