This Docker Compose configuration provides a complete Open-WebUI stack with Ollama for local AI model hosting, following the same architecture patterns as enterprise-grade applications.
- Open-WebUI: Self-hosted web interface for AI chat
- Ollama (AMD ROCm): Local AI model hosting with AMD GPU support
- PostgreSQL: Database for persistent data (optional - uses SQLite by default)
- Redis: Caching layer (optional but recommended)
- Cloudflare Tunnel: Secure remote access (optional)
- Network Isolation: Secure internal networks
- CPU Inference: Ollama runs on CPU (no GPU required)
- ARM64 Compatible: Works on Apple Silicon and ARM64 systems
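The actual service definitions live in docker-compose.yml; the sketch below is only meant as orientation for how the pieces fit together (service names, images, and tags here are illustrative assumptions, not copied from the file):

```yaml
# Illustrative layout only; consult docker-compose.yml for the real definitions.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main   # assumed image tag
    networks: [stack, traefik_public]           # reachable by Traefik and by internal services
  ollama:
    image: ollama/ollama:rocm                   # assumed ROCm build for AMD GPUs
    networks: [stack]                           # internal only
  postgres:                                     # optional; SQLite is the default
    image: postgres:16
    networks: [stack]
  redis:                                        # optional caching layer
    image: redis:7
    networks: [stack]
  cloudflared:                                  # optional Cloudflare Tunnel
    image: cloudflare/cloudflared:latest
    networks: [stack]
# Network definitions (stack, traefik_public) are covered in the networking notes below.
```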
- Copy and configure environment variables:
  cp env.example .env   # edit the .env file with your preferred settings
- Generate a secret key:
  openssl rand -hex 32   # add the output to WEBUI_SECRET_KEY in your .env file
- Initialize Docker Swarm (required for this stack):
  docker swarm init
- Create the external Traefik network (overlay):
  docker network create --driver=overlay --attachable traefik_public
- Deploy the stack to Swarm:
  docker stack deploy -c docker-compose.yml openwebui
- Access Open-WebUI:
  - Open your browser to http://localhost:3000 (or the port you configured)
  - Create your first admin account
The minimal configuration requires:
- WEBUI_SECRET_KEY: generate with openssl rand -hex 32
- CONTAINER_NAME_PREFIX: unique prefix for your containers
- TZ: your timezone
By default, Open-WebUI uses standard username/password authentication with local user accounts.
Configure OAuth/OpenID Connect for single sign-on:
- Register an app in Entra ID:
  - Go to Azure Portal → Entra ID → App registrations
  - Create a new registration
  - Set the redirect URI to: https://your-domain.com/oauth/callback
  - Note the Application (client) ID and create a client secret
- Configure OAuth settings in .env:
  OAUTH_CLIENT_ID=your-application-client-id
  OAUTH_CLIENT_SECRET=your-client-secret
  OPENID_PROVIDER_URL=https://login.microsoftonline.com/your-tenant-id/v2.0
  OAUTH_SCOPES=openid email profile
  OAUTH_PROVIDER_NAME=Entra ID
- Optional settings:
  OAUTH_USERNAME_CLAIM=preferred_username   # or 'email'
  OAUTH_EMAIL_CLAIM=email
  OAUTH_MERGE_ACCOUNTS_BY_EMAIL=false
You can configure multiple AI providers:
- Ollama (local): set ENABLE_OLLAMA=1
  - Ollama will be available at http://ollama:11434 internally
  - Pull models: docker compose exec ollama ollama pull llama2
  - Note: CPU version; models will run on the CPU (slower but no GPU required)
- OpenAI: set OPENAI_API_KEY
- Anthropic: set ANTHROPIC_API_KEY
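These provider settings are ordinary environment variables; a hedged sketch of how they might be passed from .env into the Open-WebUI service (the exact wiring and variable names in docker-compose.yml may differ):

```yaml
services:
  open-webui:
    environment:
      # Read from .env at deploy time; leaving a key unset disables that provider.
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      # Internal Ollama endpoint on the stack network (variable name is an assumption).
      - OLLAMA_BASE_URL=http://ollama:11434
```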
For CPU-only Ollama deployment:
- Set ENABLE_OLLAMA=1
- Adjust OLLAMA_MEMORY_LIMIT=4G (or higher for larger models)
- Note: CPU inference is slower but requires no special hardware
- Enable Ollama: ENABLE_OLLAMA=1
- AMD GPU device access (ROCm):
  - The compose file mounts /dev/kfd and /dev/dri for AMD GPUs.
  - Ensure the host has ROCm drivers available.
- Healthcheck: http://localhost:11434/api/tags
- Swarm placement constraints:
  - Label your AMD GPU nodes and deploy only there: docker node update --label-add amd_gpu=true <node-name>
  - This stack requires node.platform.arch==amd64 and node.platform.os==linux.
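Taken together, those settings typically land on the Ollama service roughly as follows; this is a hedged sketch, and the real docker-compose.yml may use different names and values:

```yaml
services:
  ollama:
    image: ollama/ollama:rocm               # assumed ROCm image tag
    devices:                                # AMD GPU passthrough; check that your Docker/Swarm
      - /dev/kfd                            # version honors `devices` under `docker stack deploy`
      - /dev/dri
    healthcheck:
      # Assumes an HTTP client such as curl is available inside the image.
      test: ["CMD-SHELL", "curl -sf http://localhost:11434/api/tags || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      placement:
        constraints:
          - node.labels.amd_gpu == true     # the label added with `docker node update`
          - node.platform.arch == amd64
          - node.platform.os == linux
```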
SQLite (the default) needs no additional configuration; data is stored in a Docker volume. To use PostgreSQL instead, set:
ENABLE_POSTGRES=1
POSTGRES_PASSWORD=your_secure_password
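If PostgreSQL is enabled, the password from .env typically feeds a database service along these lines (a sketch only; the image version, database name, and volume are assumptions):

```yaml
services:
  postgres:
    image: postgres:16                             # assumed version
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}     # taken from .env
      - POSTGRES_DB=openwebui                      # hypothetical database name
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks: [stack]

volumes:
  postgres_data:
```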
This stack is Swarm-ready and uses overlay networks:
- stack (overlay, attachable): internal stack network for all services
- traefik_public (external overlay): for Traefik to route public/private domains

Notes:
- Services expecting proxy traffic (e.g., open-webui) are attached to both stack and traefik_public.
- All other services are attached only to stack.
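In compose terms, that attachment pattern looks roughly like this (a sketch consistent with the description above, not copied from the file):

```yaml
networks:
  stack:
    driver: overlay
    attachable: true     # internal network shared by all stack services
  traefik_public:
    external: true       # must already exist (see the network create step above)

services:
  open-webui:
    networks: [stack, traefik_public]   # proxied by Traefik
  ollama:
    networks: [stack]                   # internal only
```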
- Deploy the stack: docker stack deploy -c docker-compose.yml openwebui
- Remove the stack: docker stack rm openwebui
- Follow Open-WebUI logs: docker service logs -f openwebui_open-webui | cat
- Update the stack (re-run the deploy): docker stack deploy -c docker-compose.yml openwebui

Manage Ollama models:
- Pull models:
  docker compose exec ollama ollama pull llama2
  docker compose exec ollama ollama pull codellama
  docker compose exec ollama ollama pull mistral
- List installed models: docker compose exec ollama ollama list
- Remove a model: docker compose exec ollama ollama rm model_name
- Change default passwords in the .env file
- Generate a strong secret key for WEBUI_SECRET_KEY
- Disable signup (ENABLE_SIGNUP=false) after creating admin accounts or when using SSO
- Use Cloudflare Tunnel for secure remote access instead of port forwarding
- Enable authentication (WEBUI_AUTH=true)
- SSO security:
  - Keep OAuth client secrets secure and rotate them regularly
  - Use HTTPS for all OAuth redirect URIs
  - Configure appropriate scopes in Entra ID (minimum required permissions)
  - Consider setting OAUTH_MERGE_ACCOUNTS_BY_EMAIL=true if users might have both local and SSO accounts
- Permission errors: Check volume permissions and ensure container can write to data directories
- Port conflicts: Change OPEN_WEBUI_PORT in .env
- Memory issues: Increase OLLAMA_MEMORY_LIMIT for larger models
- Network issues: Check Docker network connectivity
- OAuth/SSO issues:
  - Verify the redirect URI matches exactly (including protocol and path)
  - Check that the client secret hasn't expired
  - Ensure OPENID_PROVIDER_URL includes the correct tenant ID
  - Verify required API permissions are granted in Entra ID
  - Check logs for specific OAuth error messages
Check service health:
docker stack ps openwebui
All services include health checks for monitoring.
View specific service logs:
docker compose logs -f [service_name]
Backup Open-WebUI data:
docker run --rm -v openwebui_open_webui_data:/data -v $(pwd):/backup alpine tar czf /backup/openwebui-backup.tar.gz -C /data .

Restore Open-WebUI data:
docker run --rm -v openwebui_open_webui_data:/data -v $(pwd):/backup alpine tar xzf /backup/openwebui-backup.tar.gz -C /data
All volumes now use Docker's default local storage. Data is stored in Docker-managed volumes under /var/lib/docker/volumes/
(on most systems). If you need custom mount points, you can modify the volume definitions directly in the docker-compose.yml
file.
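If you do need a specific host path, one common approach is a local volume with bind driver options; a hedged example (the host path is hypothetical and must exist before deploying):

```yaml
volumes:
  open_webui_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/openwebui/data   # hypothetical host directory
```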
Adjust memory limits:
OPEN_WEBUI_MEMORY_LIMIT=4G
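Such limits typically map to deploy.resources in the service definitions; a sketch of how the .env values might be consumed (the exact variable wiring is an assumption):

```yaml
services:
  open-webui:
    deploy:
      resources:
        limits:
          memory: ${OPEN_WEBUI_MEMORY_LIMIT:-4G}   # falls back to 4G if unset in .env
  ollama:
    deploy:
      resources:
        limits:
          memory: ${OLLAMA_MEMORY_LIMIT:-4G}       # raise for larger models
```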
For issues and questions:
- Open-WebUI: https://github.com/open-webui/open-webui
- Ollama: https://github.com/ollama/ollama
- Docker Compose: https://docs.docker.com/compose/