
## Use Flash Attention to save memory and improve speed

Enabling flash attention for the diffusion model reduces memory usage by a model-dependent amount, e.g.:

- flux at 768x768: ~600 MB
- SD2 at 768x768: ~1400 MB

On most backends it slows generation down, but on CUDA it generally improves speed as well. At the moment, it is only supported for some models and some backends (CPU, CUDA/ROCm, Metal).

Enable it by adding `--diffusion-fa` to the arguments and watch for:

[INFO ] stable-diffusion.cpp:312  - Using flash attention in the diffusion model

and for the compute buffer shrinking in the debug log:

[DEBUG] ggml_extend.hpp:1004 - flux compute buffer size: 650.00 MB(VRAM)
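
As a minimal sketch (assuming the `sd` example binary from this repo; the model path and prompt are placeholders), a run with flash attention enabled looks like:

```sh
# Sketch only: model path and prompt are placeholders;
# --diffusion-fa is the flag described above.
./sd -m flux1-dev-q8_0.gguf -p "a photo of a cat" --diffusion-fa
```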

## Offload weights to the CPU to save VRAM without reducing generation speed

Using `--offload-to-cpu` allows you to offload weights to the CPU, saving VRAM without reducing generation speed.
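
For example (again assuming the `sd` example binary; model path and prompt are placeholders), CPU offload can be combined with flash attention:

```sh
# Sketch only: keep weights on the CPU to save VRAM, with flash attention on.
./sd -m flux1-dev-q8_0.gguf -p "a photo of a cat" --offload-to-cpu --diffusion-fa
```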

## Use quantization to reduce memory usage

See the quantization documentation for details.
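
As an illustration only: if your build's CLI exposes a weight-type option (e.g. a `--type` flag; treat the flag name and supported types as assumptions and check `./sd --help`), weights can be quantized at load time to reduce memory usage:

```sh
# Sketch only: load weights quantized to q8_0 (verify the exact flag name
# and supported types for your build with `./sd --help`).
./sd -m v2-1_768-ema-pruned.safetensors --type q8_0 -p "a photo of a cat"
```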
