
Documentation Index

Fetch the complete documentation index at: https://opensre.com/docs/llms.txt

Use this file to discover all available pages before exploring further.
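For example, the index can be fetched from the command line with curl:

curl https://opensre.com/docs/llms.txt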

What is OpenSRE?

OpenSRE is an open-source framework for building AI SRE agents that investigate production incidents using your existing observability stack, cloud context, and runbooks.

How do I get started quickly?

Install OpenSRE, run onboarding, then investigate a sample alert:
opensre onboard
opensre investigate -i tests/e2e/kubernetes/fixtures/datadog_k8s_alert.json
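The install step itself is not shown above; assuming OpenSRE is distributed as a Python package on PyPI (an assumption, not confirmed here), installation might look like:

pip install opensre   # assumes a PyPI package named "opensre"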

Which deployment path is recommended?

Can I self-host OpenSRE?

Yes. OpenSRE is based on LangGraph, so you can self-host it on your own infrastructure using the LangGraph runtime. Before deploying, set LLM_PROVIDER and the matching provider key (for example ANTHROPIC_API_KEY when LLM_PROVIDER=anthropic).
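For example, a minimal environment configuration for the Anthropic provider might look like the following (the key value is an illustrative placeholder):

LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=<your-anthropic-api-key>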

Which model providers are supported?

OpenSRE supports multiple providers, including Anthropic, OpenAI, OpenRouter, and Gemini; select one by setting LLM_PROVIDER plus the matching API key. Additional providers and overrides are documented in .env.example.
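Switching providers follows the same pattern. As a sketch, assuming the OpenAI key uses the conventional OPENAI_API_KEY variable name (check .env.example for the exact names), the configuration could be:

LLM_PROVIDER=openai
OPENAI_API_KEY=<your-openai-api-key>   # exact variable name is an assumption; see .env.example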

Does OpenSRE work with our existing tools?

Usually yes. OpenSRE integrates with 60+ systems across observability, cloud, incident management, data platforms, and collaboration tools. See the Integrations section of the docs for connector-specific setup steps.

What happens when I run OpenSRE with no command?

Running opensre starts an interactive incident-response shell where you can describe issues in plain language, stream investigations live, and ask grounded follow-up questions in the same session.
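An illustrative session might look like the following (the prompt marker and question text are assumptions, not actual OpenSRE output):

opensre
> checkout pods in prod are crash-looping since the 14:05 deploy, what changed?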

How are security and telemetry handled?

OpenSRE is designed for security-sensitive environments and uses structured, auditable workflows. Anonymous telemetry can be disabled with OPENSRE_NO_TELEMETRY=1. For vulnerability reports, email support@opensre.com.
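For example, in a POSIX shell you can disable telemetry for the current session before running any commands:

export OPENSRE_NO_TELEMETRY=1
opensre investigate -i tests/e2e/kubernetes/fixtures/datadog_k8s_alert.json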