Stable Diffusion UI Launcher

A crappy Gradio-based web interface for stable-diffusion.cpp.

This exists because I wanted a simple way for beginners to use stable-diffusion.cpp, and a 5 minute DuckDuckGo search didn't turn anything up. So here we are.

Features

  • Simple web UI for text-to-image generation
  • Server management (start/stop) directly from the interface
  • Support for custom checkpoints, VAE models, and LoRAs
  • GPU support for NVIDIA (CUDA) and AMD/Intel (Vulkan)
  • Configurable generation parameters (sampler, scheduler, CFG, steps)
  • Image-to-image generation (img2img)
  • Inpainting
  • Saving of UI parameters

Planned Features

  • Upscalers
  • High-res fix
  • Prompt library

Installation

Linux

Run the installation script:

chmod +x install.sh
./install.sh

The script will:

  • Detect your GPU type (or let you choose manually)
  • Download the appropriate stable-diffusion.cpp binary
  • Create a Python virtual environment
  • Install required dependencies
  • Create model directories

Windows

Run the PowerShell installation script:

powershell -ExecutionPolicy Bypass -File install.ps1

Manual Installation

For macOS, other platforms, or if you just enjoy suffering through CPU-only generation (why would you do this to yourself? It's slow enough on a 7600XT):

  1. Download the appropriate release from https://github.com/leejet/stable-diffusion.cpp/releases/latest
  2. Extract to a stable-diffusion-cpp/ directory
  3. Create a Python virtual environment:
    python3 -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  4. Install requirements:
    pip install pillow gradio requests
  5. Create model directories:
    mkdir -p checkpoints vae lora
  6. Launch the UI:
    python ui.py

Note: As of this writing, LoRA support requires a pending pull request to be merged in stable-diffusion.cpp.

Setup

Place your model files in the appropriate directories:

  • checkpoints/ - Stable Diffusion checkpoint models (.safetensors or .ckpt)
  • vae/ - VAE models (optional)
  • lora/ - LoRA models (optional)

Usage

Linux

./run.sh

Windows

.\run.bat

The web interface will open automatically at http://127.0.0.1:7860.

Workflow

  1. Select a checkpoint model from the dropdown
  2. (Optional) Select a VAE model
  3. Click "Start Server" to load the model
  4. Enter your positive and negative prompts
  5. Adjust generation parameters if needed
  6. Click "Generate" to create images

Requirements

  • Python >=3.8,<3.12
  • GPU with Vulkan support (AMD/Intel) or CUDA support (NVIDIA)
  • Sufficient RAM/VRAM for your chosen model

Tested on AMD 7600XT and AMD 9070XT.

License

See LICENSE file for details.
