Deployable Agentic AI examples using the LangChain (orchestration) + OpenRouter (LLMs) + FastMCP (MCP tools) + OpenWebUI (front end) stack.
- Git
- Docker and Docker Compose
- An API key for your chosen OpenAI-compatible endpoint provider, e.g., OpenRouter, Cerebras, NVIDIA, or Hugging Face
By default the project uses OpenRouter.ai, which provides access to hundreds of LLM endpoints. You will need to sign up for an account; an API key can be created under "Keys" in your profile.
Other OpenAI-compatible endpoints can also be used (e.g., Cerebras, Hugging Face, OpenAI, NVIDIA NIM), so long as the chosen model supports tool calling.
agentmcp_examples/
├── compose.yaml # Docker Compose orchestration
├── data_api/ # Mock data provider (FastAPI)
├── mcp_servers/ # MCP servers
│ ├── weather_server/ # Weather information MCP
│ ├── math_server/ # Mathematical calculations MCP
│ └── data_tool_server/ # Data analytics and insights MCP
└── agent_backends/
    └── react_simple/ # Main agent backend (LLM + LangChain + MCP)
git clone https://github.com/edlee123/agentmcp_examples.git
cd agentmcp_examples
The default uses `OPENROUTER_API_KEY`. To use other OpenAI-compatible endpoints, see Customizing the LLM below.
# Set your API key as an environment variable
export OPENROUTER_API_KEY="sk-or-v1-your-openrouter-key"
This environment variable is passed into the agent-backend service (see `compose.react_simple.yaml`). Make sure it is set before the next step.
docker compose -f compose.react_simple.yaml up -d
# To see the available services:
docker ps -a
Navigate to http://localhost:3000 to see the UI.
- Purpose: Provides mock data for various use cases
- Technology: FastAPI
- Endpoints: `/users`, `/products`, `/documents`, `/search`
- Purpose: Coordinates between the LLM and MCP services
- Technology: FastAPI + LangChain + OpenAI-compatible providers
- Location: `agent_backends/react_simple/`
- Features: Tool detection, LLM integration, health monitoring, streaming responses (see the wiring sketch below)
- API Compatibility: OpenAI-compatible endpoints for seamless integration
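For orientation, here is a sketch of how a backend like this might wire MCP tools into a LangChain ReAct agent. The package choices (`langchain-mcp-adapters`, `langgraph`, `langchain-openai`), service URLs, and model id are assumptions for illustration, not the repo's verified code:

```python
# Illustrative wiring of MCP tools into a ReAct agent; package choices,
# URLs, and the model id are assumptions, not the repo's actual code.
import asyncio
import os

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

async def main():
    # Discover tools from the MCP servers over SSE (URLs are illustrative)
    client = MultiServerMCPClient({
        "weather": {"url": "http://weather-mcp:8080/sse", "transport": "sse"},
        "math": {"url": "http://math-mcp:8080/sse", "transport": "sse"},
    })
    tools = await client.get_tools()

    # Any OpenAI-compatible, tool-calling model works; OpenRouter shown here
    llm = ChatOpenAI(
        model="anthropic/claude-3.5-sonnet",
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    agent = create_react_agent(llm, tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "What is 25 * 8?"}]}
    )
    print(result["messages"][-1].content)

if __name__ == "__main__":
    asyncio.run(main())
```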
- Purpose: Handles weather-related queries
- Technology: FastMCP
- Transport: SSE (Server-Sent Events)
- Purpose: Performs mathematical calculations
- Technology: FastMCP
- Operations: Addition, multiplication (a minimal sketch follows below)
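A server this small is only a few lines with FastMCP. A minimal sketch, assuming FastMCP 2.x (the actual code lives in `mcp_servers/math_server/` and may differ):

```python
# Minimal FastMCP math server sketch (assumes FastMCP 2.x; illustrative only)
from fastmcp import FastMCP

mcp = FastMCP("Math Server", host="0.0.0.0", port=8080)

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

if __name__ == "__main__":
    # Serve over SSE, matching the transport used by the other MCP servers
    mcp.run(transport="sse")
```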
- Purpose: Provides data analytics and insights
- Technology: FastMCP
- Features: User statistics, product analytics, document insights, data health checks
- Transport: SSE (Server-Sent Events)
- Purpose: Modern chat interface with advanced features
- Technology: OpenWebUI (ChatGPT-like web interface)
- Features:
  - Real-time streaming responses
  - Voice-to-text support out of the box
  - Multi-endpoint model selection
  - Authorization and user management
  - Chat history and conversation management
  - Automatic rendering of Markdown and HTML artifacts
  - Code interpreter with Python execution
  - Document and image upload
  - Markdown and code syntax highlighting
Test agent backend health:
curl http://localhost:9002/health
Show list of configured models (selectable in the UI):
curl http://localhost:9002/v1/models
Expected response:
{"object":"list","data":[{"id":"anthropic/claude-3.5-sonnet",...}]}
- Open http://localhost:3000
- The interface will load with your configured model
- Try example queries in the chat:
- "What's the weather in Tokyo?"
- "Calculate 25 * 8"
- "What is 100 + 50?"
- "Tell me about the users in the system"
- "What products do we have?"
- "Show me document statistics"
- "Create a bar chart showing product sales data" (uses OpenWebUI artifacts to render HTML)
export LLM_MODEL="anthropic/claude-3.5-sonnet"
# Test streaming chat completions with your configured model
curl -X POST http://localhost:9002/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENROUTER_API_KEY" \
-d "{
\"model\": \"$LLM_MODEL\",
\"messages\": [{\"role\": \"user\", \"content\": \"What is 15 + 27?\"}],
\"stream\": true
}"
# Test non-streaming responses
curl -X POST http://localhost:9002/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENROUTER_API_KEY" \
-d "{
\"model\": \"$LLM_MODEL\",
\"messages\": [{\"role\": \"user\", \"content\": \"What is the weather in New York?\"}],
\"stream\": false
}"
# Test models endpoint (used by OpenWebUI)
curl http://localhost:9002/v1/models
# Test API key validation (should return warning message)
curl -X POST http://localhost:9002/v1/chat/completions \
-H "Content-Type: application/json" \
-d "{
\"model\": \"$LLM_MODEL\",
\"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}],
\"stream\": true
}"
# Test health checks
curl http://localhost:9002/health
curl http://localhost:9001/health
# Test data API endpoints
curl http://localhost:9001/users
curl http://localhost:9001/products
curl http://localhost:9001/documents
curl "http://localhost:9001/search?q=technology"
- Create a new directory in `mcp_servers/`
- Add the server's Dockerfile and requirements.txt
- Implement the MCP server using FastMCP
- Update the compose file (e.g., `compose.react_simple.yaml`) to include the new service
The Data Tool MCP Server (`mcp_servers/data_tool_server/data_tool.py`) provides a template for creating new MCP servers:
```python
# Imports assume FastMCP 2.x; adjust paths if your FastMCP version differs.
import logging
from typing import Any, Dict, Optional

import httpx
from fastmcp import FastMCP
from fastmcp.tools.tool import ToolResult

logger = logging.getLogger(__name__)

# Base URL of the mock data API (service name/port are illustrative;
# match them to your compose file)
API_BASE_URL = "http://data-api:8000"

# Initialize FastMCP server
mcp = FastMCP("Data Tool Server", host="0.0.0.0", port=8080)

# Define a helper function for data API calls
async def call_data_api(endpoint: str, params: Optional[Dict] = None) -> Dict[str, Any]:
    try:
        async with httpx.AsyncClient() as client:
            url = f"{API_BASE_URL}{endpoint}"
            response = await client.get(url, params=params or {})
            response.raise_for_status()
            return response.json()
    except Exception as e:
        logger.error(f"Error calling API {endpoint}: {e}")
        return {"error": str(e)}

# Placeholder processing step; replace with your own logic
def process_data(data: Any) -> Any:
    return data

# Define a tool with required parameters
@mcp.tool()
async def my_tool(query_type: str) -> ToolResult:
    """
    Tool description here.

    Args:
        query_type: Type of query to perform (e.g., "all", "summary", "details")

    Returns:
        Tool results description
    """
    try:
        # Call external API
        data = await call_data_api("/endpoint")
        # Process data
        result = process_data(data)
        # Return structured result
        return ToolResult(structured_content=result)
    except Exception as e:
        return ToolResult(structured_content={"error": str(e)})

if __name__ == "__main__":
    # Serve over SSE (the transport listed for this server above)
    mcp.run(transport="sse")
```
Important Tips:
- Always use required, typed parameters rather than optional parameters with default values, because the MCP argument-schema validation is derived from these definitions.
- Use descriptive parameter names (e.g., `query_type`, `check_type`)
- Return results using `ToolResult` with structured content
- Handle API responses that might be lists or dictionaries (see the sketch below)
- Include proper error handling
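For example, the list-or-dictionary tip might look like this inside a tool. This sketch reuses `mcp`, `call_data_api`, and `ToolResult` from the template above; the `"users"` wrapper key is an assumption about the API's response shape:

```python
@mcp.tool()
async def user_summary(query_type: str) -> ToolResult:
    """
    Summarize users from the data API.

    Args:
        query_type: "count" for a count only, "all" for the full records
    """
    data = await call_data_api("/users")
    # The endpoint may return a bare list or a wrapping dict; handle both.
    # (The "users" key is an assumption about the wrapped shape.)
    users = data if isinstance(data, list) else data.get("users", [])
    if query_type == "count":
        return ToolResult(structured_content={"user_count": len(users)})
    return ToolResult(structured_content={"users": users})
```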
To add or change the LLMs available to the system, update the `agent_backends/react_simple/config.yaml` file. Each entry in the `models` section defines a model, its provider, and the environment variable used for its API key. For example:
```yaml
models:
  - id: "anthropic/claude-3.5-sonnet"
    provider: "openrouter"
    api_key_env: "OPENROUTER_API_KEY"
  - id: "openai/gpt-4o"
    provider: "openai"
    api_key_env: "OPENAI_API_KEY"
```
For each model you add, add the corresponding API key environment variable to the compose file (e.g., `compose.react_simple.yaml`) in the `environment:` section of the `agent-backend` service:
```yaml
services:
  ...
  agent-backend:
    environment:
      - OPENROUTER_API_KEY=sk-or-v1-your-openrouter-key
      - OPENAI_API_KEY=sk-your-openai-key
```
Important: All models listed in `config.yaml` must support tool calling (function calling) for the system to work correctly. If a model does not support tool calls, it will not be able to use the MCP tools and may cause errors.
After updating `config.yaml` and the environment variables, restart the agent-backend service:
docker compose -f compose.react_simple.yaml restart agent-backend
You can now select the newly available models in OpenWebUI.
- Orchestration API: `GET /health`
- Data API: `GET /health`
- Individual Services: Check via Docker logs
The Data API provides a comprehensive health check endpoint that returns:
- Service status
- API version
- Data counts (users, products, documents)
- Timestamp information
To view logs:
# View all logs
docker compose -f compose.react_simple.yaml logs
# View specific service logs
docker compose -f compose.react_simple.yaml logs agent-backend
This template is designed for prototyping agent apps. Feel free to:
- Add new MCP servers and tools
- Add mock data APIs to create new use cases
- Add new agent backends
- Add additional LLM providers
- Customize the look and feel of OpenWebUI
Services not starting:
docker compose -f compose.react_simple.yaml down
docker compose -f compose.react_simple.yaml build --no-cache
docker compose -f compose.react_simple.yaml up -d
MCP servers not responding:
- Check if ports are available
- Verify MCP server logs: `docker compose -f compose.react_simple.yaml logs weather-mcp`
- For the Data Tool MCP: `docker compose -f compose.react_simple.yaml logs data-tool-mcp`
- OpenWebUI Documentation
- FastMCP Documentation
- LangChain Documentation
- OpenRouter API
- Docker Compose Reference
Happy Coding with AgentMCP! 🎉