Last updated: 2024-11-01 (Fri) 22:16:17

SDPA

Scaled Dot Product Attention

--opt-sdp-attention
Enable scaled dot product cross-attention layer optimization; requires PyTorch 2.
May result in faster speeds than using xFormers on some systems, but requires more VRAM. (non-deterministic)

--opt-sdp-no-mem-attention
Enable scaled dot product cross-attention layer optimization without memory-efficient attention, which makes image generation deterministic; requires PyTorch 2.
May result in faster speeds than using xFormers on some systems, but requires more VRAM. (deterministic, slightly slower than --opt-sdp-attention and uses more VRAM)
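For reference, the computation these flags optimize is the standard scaled dot product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of that formula for illustration only; it is not the WebUI or PyTorch implementation (which uses fused kernels such as FlashAttention and memory-efficient attention via torch.nn.functional.scaled_dot_product_attention).

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.shape[-1]
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of values
    return weights @ v

# Example: batch of 2 sequences, 4 tokens, 8-dim heads (shapes are arbitrary)
q = np.random.default_rng(0).normal(size=(2, 4, 8))
out = scaled_dot_product_attention(q, q, q)
```

The difference between the two flags is only in which backend kernel computes this: --opt-sdp-no-mem-attention disables the memory-efficient kernel, trading VRAM and a little speed for deterministic output.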

References