Commit 472e8c1: Docs for LiteLLM integration (#532)

1 parent: bd404e0
7 files changed: +90 -9 lines

docs/ja/quickstart.md (+1 -1)

```diff
@@ -186,4 +186,4 @@ if __name__ == "__main__":
 
 - [エージェント](agents.md) の設定方法について学ぶ
 - [エージェントの実行](running_agents.md) について学ぶ
-- [tools](tools.md)、[guardrails](guardrails.md)、[models](models.md) について学ぶ
+- [tools](tools.md)、[guardrails](guardrails.md)、[models](models/index.md) について学ぶ
```

docs/models.md renamed to docs/models/index.md (+3 -3)

```diff
@@ -70,9 +70,9 @@ You can use other LLM providers in 3 ways (examples [here](https://github.com/op
 
 1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
 2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
-3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py).
+3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py). An easy way to use most available models is via the [LiteLLM integration](./litellm.md).
 
-In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](tracing.md).
+In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](../tracing.md).
 
 !!! note
 
@@ -86,7 +86,7 @@ If you get errors related to tracing, this is because traces are uploaded to Ope
 
 1. Disable tracing entirely: [`set_tracing_disabled(True)`][agents.set_tracing_disabled].
 2. Set an OpenAI key for tracing: [`set_tracing_export_api_key(...)`][agents.set_tracing_export_api_key]. This API key will only be used for uploading traces, and must be from [platform.openai.com](https://platform.openai.com/).
-3. Use a non-OpenAI trace processor. See the [tracing docs](tracing.md#custom-tracing-processors).
+3. Use a non-OpenAI trace processor. See the [tracing docs](../tracing.md#custom-tracing-processors).
 
 ### Responses API support
```
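To make the options in this diff concrete, here is a minimal sketch of options 1 and 3; the `base_url`, API keys, and model names are placeholders for illustration, not real endpoints or recommendations:

```python
# Sketch only: base_url, API keys, and model names below are hypothetical placeholders.
from openai import AsyncOpenAI

from agents import Agent, set_default_openai_client, set_tracing_disabled
from agents.extensions.models.litellm_model import LitellmModel

# Option 1: route every agent through one OpenAI-compatible endpoint (global client).
client = AsyncOpenAI(base_url="https://example.com/v1", api_key="PROVIDER_API_KEY")
set_default_openai_client(client)

# Without a platform.openai.com key, trace uploads fail, so disable tracing as advised above.
set_tracing_disabled(True)

# Option 3: pin one specific agent to a different provider via the LiteLLM integration.
claude_agent = Agent(
    name="Claude assistant",
    instructions="Be concise.",
    model=LitellmModel(model="anthropic/claude-3-5-sonnet-20240620", api_key="ANTHROPIC_API_KEY"),
)
```

Option 2 (`ModelProvider`) follows the same idea but is supplied at the `Runner.run` level; see the linked custom_example_provider.py for a runnable version.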

docs/models/litellm.md (new file, +73)

# Using any model via LiteLLM

!!! note

    The LiteLLM integration is in beta. You may run into issues with some model providers, especially smaller ones. Please report any issues via [GitHub issues](https://github.com/openai/openai-agents-python/issues) and we'll fix them quickly.

[LiteLLM](https://docs.litellm.ai/docs/) is a library that allows you to use 100+ models via a single interface. We've added a LiteLLM integration to allow you to use any AI model in the Agents SDK.

## Setup

You'll need to ensure `litellm` is available. You can do this by installing the optional `litellm` dependency group:

```bash
pip install "openai-agents[litellm]"
```

Once done, you can use [`LitellmModel`][agents.extensions.models.litellm_model.LitellmModel] in any agent.
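Since the extra is optional, a quick import check confirms the dependency group actually resolved before you wire it into an agent; a trivial sketch:

```python
# Both imports fail fast with ImportError if "openai-agents[litellm]" isn't installed.
import litellm

from agents.extensions.models.litellm_model import LitellmModel

print(f"litellm importable; model class available: {LitellmModel.__name__}")
```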
## Example

This is a fully working example. When you run it, you'll be prompted for a model name and API key. For example, you could enter:

- `openai/gpt-4.1` for the model, and your OpenAI API key
- `anthropic/claude-3-5-sonnet-20240620` for the model, and your Anthropic API key
- etc.

For a full list of models supported in LiteLLM, see the [litellm providers docs](https://docs.litellm.ai/docs/providers).

```python
from __future__ import annotations

import asyncio

from agents import Agent, Runner, function_tool, set_tracing_disabled
from agents.extensions.models.litellm_model import LitellmModel


@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main(model: str, api_key: str):
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model=model, api_key=api_key),
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    # First try to get model/api key from args
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, required=False)
    parser.add_argument("--api-key", type=str, required=False)
    args = parser.parse_args()

    model = args.model
    if not model:
        model = input("Enter a model name for Litellm: ")

    api_key = args.api_key
    if not api_key:
        api_key = input("Enter an API key for Litellm: ")

    asyncio.run(main(model, api_key))
```
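One detail worth flagging: the example imports `set_tracing_disabled` but never calls it. If the key you enter is not an OpenAI key, trace uploads will fail (see the tracing note in docs/models/index.md above), so you may want one extra line before `asyncio.run(...)`:

```python
from agents import set_tracing_disabled

# Avoid tracing-upload errors when running with a non-OpenAI API key.
set_tracing_disabled(True)
```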

docs/quickstart.md (+1 -1)

```diff
@@ -186,4 +186,4 @@ Learn how to build more complex agentic flows:
 
 - Learn about how to configure [Agents](agents.md).
 - Learn about [running agents](running_agents.md).
-- Learn about [tools](tools.md), [guardrails](guardrails.md) and [models](models.md).
+- Learn about [tools](tools.md), [guardrails](guardrails.md) and [models](models/index.md).
```

docs/ref/extensions/litellm.md (new file, +3)

# `LiteLLM Models`

::: agents.extensions.models.litellm_model

mkdocs.yml (+4 -1)

```diff
@@ -66,7 +66,9 @@ plugins:
             - context.md
             - guardrails.md
             - multi_agent.md
-            - models.md
+            - Models:
+                  - models/index.md
+                  - models/litellm.md
             - config.md
             - visualization.md
             - Voice agents:
@@ -123,6 +125,7 @@ plugins:
         - Extensions:
               - ref/extensions/handoff_filters.md
               - ref/extensions/handoff_prompt.md
+              - ref/extensions/litellm.md
 
         - locale: ja
           name: 日本語
```

src/agents/extensions/models/litellm_model.py (+5 -3)

```diff
@@ -47,6 +47,11 @@
 
 
 class LitellmModel(Model):
+    """This class enables using any model via LiteLLM. LiteLLM allows you to access OpenAI,
+    Anthropic, Gemini, Mistral, and many other models.
+    See supported models here: [litellm models](https://docs.litellm.ai/docs/providers).
+    """
+
     def __init__(
         self,
         model: str,
@@ -140,9 +145,6 @@ async def stream_response(
         *,
         previous_response_id: str | None,
     ) -> AsyncIterator[TResponseStreamEvent]:
-        """
-        Yields a partial message as it is generated, as well as the usage information.
-        """
         with generation_span(
             model=str(self.model),
             model_config=dataclasses.asdict(model_settings)
```
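The diff also drops the `stream_response` docstring, which described yielding partial messages plus usage information. On the caller's side that stream is consumed through the SDK's streaming runner; a hedged sketch (event names follow the SDK's streaming docs, and the model name and key are placeholders):

```python
import asyncio

from openai.types.responses import ResponseTextDeltaEvent

from agents import Agent, Runner
from agents.extensions.models.litellm_model import LitellmModel


async def main() -> None:
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model="anthropic/claude-3-5-sonnet-20240620", api_key="ANTHROPIC_API_KEY"),
    )
    # Runner.run_streamed drives the model's stream_response under the hood.
    result = Runner.run_streamed(agent, input="What's the weather in Tokyo?")
    async for event in result.stream_events():
        # Raw response events carry the partial-message deltas the docstring referred to.
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)


if __name__ == "__main__":
    asyncio.run(main())
```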
