Commit

Update attention_processor.py
Refactor picking an element from a set
tenpercent authored Jan 15, 2025
1 parent 36d34fd commit 2d2411e
Showing 1 changed file with 1 addition and 1 deletion.
src/diffusers/models/attention_processor.py (1 addition, 1 deletion)

@@ -402,7 +402,7 @@ def set_use_memory_efficient_attention_xformers(
     dtype = None
     if attention_op is not None:
         op_fw, op_bw = attention_op
-        dtype = list(op_fw.SUPPORTED_DTYPES)[0]
+        dtype, *_ = op_fw.SUPPORTED_DTYPES
     q = torch.randn((1, 2, 40), device="cuda", dtype=dtype)
     _ = xformers.ops.memory_efficient_attention(q, q, q)
 except Exception as e:
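The change replaces list-then-index with star unpacking to pull one element out of a set. Below is a minimal sketch of the two patterns, using a plain Python set of placeholder strings as a stand-in for op_fw.SUPPORTED_DTYPES (which in xformers holds torch dtypes):

# Stand-in for op_fw.SUPPORTED_DTYPES; values here are hypothetical placeholders.
supported_dtypes = {"f16", "bf16", "f32"}

# Before: build a throwaway list just to index its first element.
dtype_before = list(supported_dtypes)[0]

# After: iterable unpacking binds one element to `dtype_after` and collects
# the remainder into the discarded list `_`.
dtype_after, *_ = supported_dtypes

# Sets carry no order, so both forms yield an arbitrary (but valid) member.
assert dtype_before in supported_dtypes
assert dtype_after in supported_dtypes

Either form selects an arbitrary member, since sets are unordered; the unpacking form simply reads as "take one" without the explicit list-and-index step.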
