Skip to content

Navigation Menu

Sign in
Appearance settings

Search code, repositories, users, issues, pull requests...

Provide feedback

We read every piece of feedback, and take your input very seriously.

Saved searches

Use saved searches to filter your results more quickly

Appearance settings

vllm-project/semantic-router

Open more actions menu

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Latest News 🔥


Innovations ✨

architecture

Intelligent Routing 🧠

Auto-Reasoning and Auto-Selection of Models

An Mixture-of-Models (MoM) router that intelligently directs OpenAI API requests to the most suitable models from a defined pool based on Semantic Understanding of the request's intent (Complexity, Task, Tools).

This is achieved using BERT classification. Conceptually similar to Mixture-of-Experts (MoE) which lives within a model, this system selects the best entire model for the nature of the task.

As such, the overall inference accuracy is improved by using a pool of models that are better suited for different types of tasks:

Model Accuracy

The screenshot below shows the LLM Router dashboard in Grafana.

LLM Router Dashboard

The router is implemented in two ways:

  • Golang (with Rust FFI based on the candle rust ML framework)
  • Python Benchmarking will be conducted to determine the best implementation.

Auto-Selection of Tools

Select the tools to use based on the prompt, avoiding the use of tools that are not relevant to the prompt so as to reduce the number of prompt tokens and improve tool selection accuracy by the LLM.

Category-Specific System Prompts

Automatically inject specialized system prompts based on query classification, ensuring optimal model behavior for different domains (math, coding, business, etc.) without manual prompt engineering.

Enterprise Security 🔒

PII detection

Detect PII in the prompt, avoiding sending PII to the LLM so as to protect the privacy of the user.

Prompt guard

Detect if the prompt is a jailbreak prompt, avoiding sending jailbreak prompts to the LLM so as to prevent the LLM from misbehaving.

Similarity Caching ⚡️

Cache the semantic representation of the prompt so as to reduce the number of prompt tokens and improve the overall inference latency.

Distributed Tracing 🔍

Comprehensive observability with OpenTelemetry distributed tracing provides fine-grained visibility into the request processing pipeline.

vLLM Semantic Router Dashboard 💬

Watch the quick demo of the dashboard below:

Quick Start 🚀

Get up and running in seconds with our interactive setup script:

bash ./scripts/quickstart.sh

This command will:

  • 🔍 Check all prerequisites automatically
  • 📦 Install HuggingFace CLI if needed
  • 📥 Download all required AI models (~1.5GB)
  • 🐳 Start all Docker services
  • ⏳ Wait for services to become healthy
  • 🌐 Show you all the endpoints and next steps

For detailed installation and configuration instructions, see the Complete Documentation.

Documentation 📖

For comprehensive documentation including detailed setup instructions, architecture guides, and API references, visit:

👉 Complete Documentation at Read the Docs

The documentation includes:

Community 👋

For questions, feedback, or to contribute, please join #semantic-router channel in vLLM Slack.

Community Meetings 📅

We host bi-weekly community meetings to sync up with contributors across different time zones:

Join us to discuss the latest developments, share ideas, and collaborate on the project!

Citation

If you find Semantic Router helpful in your research or projects, please consider citing it:

@misc{semanticrouter2025,
  title={vLLM Semantic Router},
  author={vLLM Semantic Router Team},
  year={2025},
  howpublished={\url{https://github.com/vllm-project/semantic-router}},
}

Star History 🔥

We opened the project at Aug 31, 2025. We love open source and collaboration ❤️

Star History Chart

Morty Proxy This is a proxified and sanitized view of the page, visit original site.