OLLAMA + OPEN-WEBUI + PIPELINES + LANGFUSE

Introduction

This repository provides a setup for integrating OLLAMA, OPEN-WEBUI, PIPELINES, and LANGFUSE using Docker. Follow the steps below to get everything up and running.

Prerequisites

  • Docker and the required GPU drivers installed on your system (the default Compose configuration uses the nvidia driver).
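
To verify this before starting the stack, you can run a quick check. The nvidia/cuda image tag below is just an example; any recent CUDA base image works:

    # Confirm Docker is installed and the daemon is reachable
    docker --version
    docker info > /dev/null && echo "Docker daemon OK"

    # Confirm containers can access the GPU (requires the NVIDIA Container Toolkit)
    docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi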

Installation

  1. Clone this repository:

    git clone https://github.com/karaketir16/openwebui-langfuse.git
    cd openwebui-langfuse
  2. Run the setup script:

    ./run-compose.sh

    or

    docker compose -f docker-compose.yaml -f langfuse-v3.yaml up -d
    # default driver is nvidia
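
Once Compose reports the services as running, a quick sanity check looks like the following. This assumes the port mappings used later in this README (Open-WebUI on 3000, Langfuse on 4000) and Langfuse's /api/public/health endpoint, which recent Langfuse versions expose:

    # List the project's containers and their status
    docker compose -f docker-compose.yaml -f langfuse-v3.yaml ps

    # Both web UIs should answer on their mapped ports
    curl -s -o /dev/null -w "Open-WebUI: %{http_code}\n" http://localhost:3000
    curl -s http://localhost:4000/api/public/health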

Configuration

Langfuse Setup

  1. Documentation

    • You can find up-to-date documentation on the Langfuse website: https://langfuse.com/docs
  2. Download the langfuse_filter_pipeline.py file (only if offline):

    • If your setup does not have internet access:
      • You can manually download the script from https://github.com/open-webui/pipelines/blob/main/examples/filters/langfuse_filter_pipeline.py (a curl sketch follows this list)
      • Or use the local copy provided at example/langfuse_filter_pipeline.py
  3. Access Langfuse:

    • Open your browser and go to http://localhost:4000.
  4. Create an Admin Account and Project:

    • Create an admin account and then create an organization and a project.
    • Go to Project Settings and create an API key.
    • Retrieve the secret key and public key.
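
For step 2 above, if you have a second machine with internet access, the raw script can be fetched directly with curl and then copied to the offline host:

    # Download the Langfuse filter pipeline script from the open-webui/pipelines repo
    curl -LO https://raw.githubusercontent.com/open-webui/pipelines/refs/heads/main/examples/filters/langfuse_filter_pipeline.py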

Open-WebUI Setup

  1. Access Open-WebUI:

    • Open your browser and go to http://localhost:3000.
  2. Create an Admin Account:

    • Create an admin account.
  3. Upload the Pipeline Script:

    • Go to Settings -> Admin Settings -> Pipelines.
    • If online, paste this URL:
      https://raw.githubusercontent.com/open-webui/pipelines/refs/heads/main/examples/filters/langfuse_filter_pipeline.py
      
      into the Install from GitHub URL field and click the download button.
    • If offline or using a custom script, upload langfuse_filter_pipeline.py from your local machine via the Upload Pipeline section.
  4. Configure the Script:

    • After uploading the pipeline, edit its configuration in the UI.
    • Replace the placeholder values as follows:
      • your-secret-key-here → your Langfuse secret key
      • your-public-key-here → your Langfuse public key
      • https://cloud.langfuse.com → http://langfuse-web:4000 (the local Langfuse address inside the Docker network)
    • A curl sketch for verifying the key pair follows this list.
  5. Monitor Usage:

    • You can now monitor Open-WebUI usage statistics from Langfuse.
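
Before wiring the keys into the pipeline, you can confirm that the key pair from step 4 of the Langfuse setup is valid: the Langfuse public API authenticates with HTTP Basic auth, using the public key as the username and the secret key as the password. The exact endpoint below is an assumption based on the Langfuse public API; replace the pk-lf-.../sk-lf-... placeholders with your own keys:

    # Returns project details if the key pair is valid (401 otherwise)
    curl -s -u "pk-lf-...:sk-lf-..." http://localhost:4000/api/public/projects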

Model Downloading

  1. Access Open-WebUI:

    • Open your browser and go to http://localhost:3000.
  2. Create an Admin Account:

    • Create an admin account if you haven’t already.
  3. Pull Models:

    • Navigate to Settings -> Admin Settings -> Models.
    • Enter a model tag to pull from the Ollama library (e.g., phi3:mini).
    • Press the pull button.
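
As an alternative to the UI, models can be pulled from the command line. This assumes the Ollama service is named ollama in the Compose files; check docker compose ps for the actual service name:

    # Pull a model inside the running Ollama container
    docker compose -f docker-compose.yaml -f langfuse-v3.yaml exec ollama ollama pull phi3:mini

    # Confirm the model is available
    docker compose -f docker-compose.yaml -f langfuse-v3.yaml exec ollama ollama list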
