Enscend/BasicDockerMonitor
Initial Setup

Note: ChatGPT did most of the work on this one...

  1. Start the stack with Docker Compose via the _runMonitor.bat or _runMonitor.sh script (depending on your system type); the script sets an environment variable for the system root before launching.
  • Assumes you already have Docker properly set up on your system
  • docker-compose.yml is set to restart: always, so the stack comes back up automatically when you restart the computer
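
A minimal compose file for a stack like this might look as follows; this is a sketch, and the image tags, service names, and volume paths here are illustrative assumptions rather than copies of the repository's actual file:

```yaml
# Sketch: Grafana + Prometheus + cAdvisor, all with restart: always
services:
  prometheus:
    image: prom/prometheus:latest
    restart: always                 # comes back up after a reboot
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro

  grafana:
    image: grafana/grafana:latest
    restart: always
    ports:
      - "3000:3000"

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    restart: always
    volumes:
      # cAdvisor needs read-only views of host state to report container stats
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
```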

Windows

C:\your\repo\folder\> _runMonitor

Linux

user@system:~/your/repo/folder$ sudo chmod +x ./_runMonitor.sh
user@system:~/your/repo/folder$ ./_runMonitor.sh
  2. Navigate to http://localhost:3000 to load the Grafana UI
  3. Change the default password for Grafana
  4. From the Data sources link in the left-side nav panel, click Add new data source to add Prometheus as a data source
  • Prometheus server URL *: http://prometheus:9090
  • Prometheus type: Prometheus
  • Prometheus version: > 2.50.x
  • Click Save & Test
  5. From the Dashboards link in the left-side nav panel, click New, then Import
  • Enter dashboard ID 893 (one ChatGPT suggested) for Docker monitoring, then click Load
  • Select the Prometheus data source that you created above, then click Import
  6. From the Dashboards link in the left-side nav panel, select the Docker and system monitoring dashboard to view your stats
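
The manual data-source step above could also be automated with Grafana's provisioning feature: a file like the sketch below, mounted into the container at /etc/grafana/provisioning/datasources/, creates the Prometheus data source at startup. The file name and the idea of wiring it into this particular stack are assumptions, not part of the repository:

```yaml
# datasources.yml — Grafana data source provisioning (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # service name on the compose network
    isDefault: true
```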

vLLM Metrics

This stack is pre-configured to scrape metrics from a vLLM instance running on the host at port 8000.

Prerequisites:

  • Start vLLM with metrics enabled (the flag varies by version):
    --enable-metrics          # older versions
    --prometheus-port 8000    # some versions expose on a separate port
    
    Verify the endpoint is live: curl http://localhost:8000/metrics
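
On the Prometheus side, the scrape job for vLLM presumably looks something like the sketch below; the job name and scrape interval are assumptions, not values confirmed from this repository's prometheus.yml:

```yaml
# Sketch of the vLLM scrape job in prometheus.yml
scrape_configs:
  - job_name: vllm
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      # host.docker.internal lets the container reach vLLM on the host
      - targets: ["host.docker.internal:8000"]
```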

After starting the stack:

  1. Open http://localhost:9090/targets — the vllm job should show status UP
  2. In Prometheus, query a metric like vllm:num_requests_running to confirm data is flowing
  3. In Grafana, import a vLLM dashboard: Dashboards > New > Import, then search the Grafana dashboard library for "vLLM"

Linux note: The extra_hosts: host.docker.internal:host-gateway entry in docker-compose.yml is what allows Prometheus (inside the monitoring bridge network) to reach the vLLM process on the host network. On Docker Desktop (Windows/Mac) this resolves automatically.
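
For reference, that compose entry looks roughly like the fragment below (a sketch; the repository's actual file may place it differently):

```yaml
services:
  prometheus:
    extra_hosts:
      # On Linux, maps host.docker.internal to the host's gateway IP;
      # Docker Desktop provides this alias automatically.
      - "host.docker.internal:host-gateway"
```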

About

Basic Docker monitoring setup using Docker Compose and public Docker images from Grafana, Prometheus, and cAdvisor
