VS Code LM: refresh model limits; GPT-5-mini max output = 127,805 #2885
Merged: chrarnoldus merged 6 commits into Kilo-Org:main from shameez-struggles-to-commit:fix/vscode-lm-gpt5-mini-output on Oct 15, 2025.
Conversation
🦋 Changeset detected. Latest commit: 632e7f3. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
What's the source of this data?
Pulled it from VS Code: it has a debug chat window where you can see the details for each model offered.

Available Models (Raw API Response):

[
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4.1",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 16384,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"structured_outputs": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4.1",
"is_chat_default": true,
"is_chat_fallback": true,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "GPT-4.1",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest GPT-4.1 model from OpenAI. [Learn more about how GitHub Copilot serves GPT-4.1](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task#gpt-41)."
},
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4.1-2025-04-14"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-5-mini",
"limits": {
"max_context_window_tokens": 264000,
"max_output_tokens": 64000,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"structured_outputs": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-5-mini",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "lightweight",
"model_picker_enabled": true,
"name": "GPT-5 mini",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest GPT-5 mini model from OpenAI. [Learn more about how GitHub Copilot serves GPT-5 mini](https://gh.io/copilot-openai)."
},
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-5-mini"
},
{
"billing": {
"is_premium": true,
"multiplier": 1,
"restricted_to": [
"pro",
"pro_plus",
"max",
"business",
"enterprise"
]
},
"capabilities": {
"family": "gpt-5",
"limits": {
"max_context_window_tokens": 264000,
"max_output_tokens": 64000,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"structured_outputs": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-5",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "GPT-5",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest GPT-5 model from OpenAI. [Learn more about how GitHub Copilot serves GPT-5](https://gh.io/copilot-openai)."
},
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-5"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-3.5-turbo",
"limits": {
"max_context_window_tokens": 16384,
"max_output_tokens": 4096,
"max_prompt_tokens": 12288
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"tool_calls": true
},
"tokenizer": "cl100k_base",
"type": "chat"
},
"id": "gpt-3.5-turbo",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT 3.5 Turbo",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-3.5-turbo-0613"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-3.5-turbo",
"limits": {
"max_context_window_tokens": 16384,
"max_output_tokens": 4096,
"max_prompt_tokens": 12288
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"tool_calls": true
},
"tokenizer": "cl100k_base",
"type": "chat"
},
"id": "gpt-3.5-turbo-0613",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT 3.5 Turbo",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-3.5-turbo-0613"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4o-mini",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 4096,
"max_prompt_tokens": 12288
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4o-mini",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT-4o mini",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4o-mini-2024-07-18"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4o-mini",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 4096,
"max_prompt_tokens": 12288
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4o-mini-2024-07-18",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT-4o mini",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4o-mini-2024-07-18"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4",
"limits": {
"max_context_window_tokens": 32768,
"max_output_tokens": 4096,
"max_prompt_tokens": 32768
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"tool_calls": true
},
"tokenizer": "cl100k_base",
"type": "chat"
},
"id": "gpt-4",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT 4",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4-0613"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4",
"limits": {
"max_context_window_tokens": 32768,
"max_output_tokens": 4096,
"max_prompt_tokens": 32768
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"tool_calls": true
},
"tokenizer": "cl100k_base",
"type": "chat"
},
"id": "gpt-4-0613",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT 4",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4-0613"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4-turbo",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 4096,
"max_prompt_tokens": 64000
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true
},
"tokenizer": "cl100k_base",
"type": "chat"
},
"id": "gpt-4-0125-preview",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT 4 Turbo",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4-0125-preview"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4o",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 4096,
"max_prompt_tokens": 64000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4o",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "GPT-4o",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4o-2024-11-20"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4o",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 16384,
"max_prompt_tokens": 64000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4o-2024-11-20",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT-4o",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4o-2024-11-20"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4o",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 4096,
"max_prompt_tokens": 64000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4o-2024-05-13",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT-4o",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4o-2024-05-13"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4o",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 4096,
"max_prompt_tokens": 64000
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4-o-preview",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT-4o",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4o-2024-05-13"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4o",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 16384,
"max_prompt_tokens": 64000
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4o-2024-08-06",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT-4o",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4o-2024-08-06"
},
{
"billing": {
"is_premium": true,
"multiplier": 0.33
},
"capabilities": {
"family": "o3-mini",
"limits": {
"max_context_window_tokens": 200000,
"max_output_tokens": 100000,
"max_prompt_tokens": 64000
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"structured_outputs": true,
"tool_calls": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "o3-mini",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "lightweight",
"model_picker_enabled": true,
"name": "o3-mini",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "o3-mini-2025-01-31"
},
{
"billing": {
"is_premium": true,
"multiplier": 0.33
},
"capabilities": {
"family": "o3-mini",
"limits": {
"max_context_window_tokens": 200000,
"max_output_tokens": 100000,
"max_prompt_tokens": 64000
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"structured_outputs": true,
"tool_calls": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "o3-mini-2025-01-31",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "o3-mini",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "o3-mini-2025-01-31"
},
{
"billing": {
"is_premium": true,
"multiplier": 0.33
},
"capabilities": {
"family": "o3-mini",
"limits": {
"max_context_window_tokens": 200000,
"max_output_tokens": 100000,
"max_prompt_tokens": 64000
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"structured_outputs": true,
"tool_calls": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "o3-mini-paygo",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "o3-mini",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "o3-mini-paygo"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4.1",
"object": "model_capabilities",
"supports": {
"streaming": true
},
"tokenizer": "o200k_base",
"type": "completion"
},
"id": "gpt-41-copilot",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "GPT-4.1 Copilot",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-41-copilot"
},
{
"billing": {
"is_premium": false,
"multiplier": 0,
"restricted_to": [
"pro",
"pro_plus",
"max",
"business",
"enterprise"
]
},
"capabilities": {
"family": "grok-code",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 64000,
"max_prompt_tokens": 128000
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"structured_outputs": true,
"tool_calls": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "grok-code-fast-1",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "powerful",
"model_picker_enabled": true,
"name": "Grok Code Fast 1 (Preview)",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest Grok Code Fast 1 model from xAI. If enabled, you instruct GitHub Copilot to send data to xAI Grok Code Fast 1. [Learn more about how GitHub Copilot serves Grok Code Fast 1](https://docs.github.com/en/copilot/reference/ai-models/model-hosting#xai-models). During launch week, [promotional pricing is 0x](https://gh.io/copilot-grok-code-promo)."
},
"preview": true,
"vendor": "xAI",
"version": "grok-code-fast-1"
},
{
"billing": {
"is_premium": true,
"multiplier": 1,
"restricted_to": [
"pro",
"pro_plus",
"max",
"business",
"enterprise"
]
},
"capabilities": {
"family": "gpt-5-codex",
"limits": {
"max_context_window_tokens": 200000,
"max_output_tokens": 64000,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"structured_outputs": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-5-codex",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "powerful",
"model_picker_enabled": true,
"name": "GPT-5-Codex (Preview)",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest GPT-5-Codex model from OpenAI. [Learn more about how GitHub Copilot serves GPT-5-Codex](https://gh.io/copilot-openai)."
},
"preview": true,
"supported_endpoints": [
"/responses"
],
"vendor": "OpenAI",
"version": "gpt-5-codex"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "text-embedding-ada-002",
"limits": {
"max_inputs": 512
},
"object": "model_capabilities",
"supports": {},
"tokenizer": "cl100k_base",
"type": "embeddings"
},
"id": "text-embedding-ada-002",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "Embedding V2 Ada",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "text-embedding-3-small"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "text-embedding-3-small",
"limits": {
"max_inputs": 512
},
"object": "model_capabilities",
"supports": {
"dimensions": true
},
"tokenizer": "cl100k_base",
"type": "embeddings"
},
"id": "text-embedding-3-small",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "Embedding V3 small",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "text-embedding-3-small"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "text-embedding-3-small",
"object": "model_capabilities",
"supports": {
"dimensions": true
},
"tokenizer": "cl100k_base",
"type": "embeddings"
},
"id": "text-embedding-3-small-inference",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "Embedding V3 small (Inference)",
"object": "model",
"preview": false,
"vendor": "Azure OpenAI",
"version": "text-embedding-3-small"
},
{
"billing": {
"is_premium": true,
"multiplier": 1
},
"capabilities": {
"family": "claude-3.5-sonnet",
"limits": {
"max_context_window_tokens": 90000,
"max_output_tokens": 8192,
"max_prompt_tokens": 90000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "claude-3.5-sonnet",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "Claude Sonnet 3.5",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest Claude 3.5 Sonnet model from Anthropic. [Learn more about how GitHub Copilot serves Claude 3.5 Sonnet](https://docs.github.com/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot)."
},
"preview": false,
"vendor": "Anthropic",
"version": "claude-3.5-sonnet"
},
{
"billing": {
"is_premium": true,
"multiplier": 1,
"restricted_to": [
"pro",
"pro_plus",
"max",
"business",
"enterprise"
]
},
"capabilities": {
"family": "claude-3.7-sonnet",
"limits": {
"max_context_window_tokens": 200000,
"max_output_tokens": 16384,
"max_prompt_tokens": 90000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 5,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "claude-3.7-sonnet",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "Claude Sonnet 3.7",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest Claude 3.7 Sonnet model from Anthropic. [Learn more about how GitHub Copilot serves Claude 3.7 Sonnet](https://docs.github.com/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot)."
},
"preview": false,
"vendor": "Anthropic",
"version": "claude-3.7-sonnet"
},
{
"billing": {
"is_premium": true,
"multiplier": 1.25,
"restricted_to": [
"pro",
"pro_plus",
"max",
"business",
"enterprise"
]
},
"capabilities": {
"family": "claude-3.7-sonnet-thought",
"limits": {
"max_context_window_tokens": 200000,
"max_output_tokens": 16384,
"max_prompt_tokens": 90000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp"
]
}
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "claude-3.7-sonnet-thought",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "powerful",
"model_picker_enabled": true,
"name": "Claude Sonnet 3.7 Thinking",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest Claude 3.7 Sonnet model from Anthropic. [Learn more about how GitHub Copilot serves Claude 3.7 Sonnet](https://docs.github.com/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot)."
},
"preview": false,
"vendor": "Anthropic",
"version": "claude-3.7-sonnet-thought"
},
{
"billing": {
"is_premium": true,
"multiplier": 1,
"restricted_to": [
"pro",
"pro_plus",
"max",
"business",
"enterprise"
]
},
"capabilities": {
"family": "claude-sonnet-4",
"limits": {
"max_context_window_tokens": 216000,
"max_output_tokens": 16000,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 5,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp"
]
}
},
"object": "model_capabilities",
"supports": {
"max_thinking_budget": 32000,
"min_thinking_budget": 1024,
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "claude-sonnet-4",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "Claude Sonnet 4",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest Claude Sonnet 4 model from Anthropic. [Learn more about how GitHub Copilot serves Claude Sonnet 4](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)."
},
"preview": false,
"vendor": "Anthropic",
"version": "claude-sonnet-4"
},
{
"billing": {
"is_premium": true,
"multiplier": 1,
"restricted_to": [
"pro",
"pro_plus",
"max",
"business",
"enterprise"
]
},
"capabilities": {
"family": "claude-sonnet-4.5",
"limits": {
"max_context_window_tokens": 144000,
"max_output_tokens": 16000,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 5,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "claude-sonnet-4.5",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "Claude Sonnet 4.5",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest Claude Sonnet 4.5 model from Anthropic. [Learn more about how GitHub Copilot serves Claude Sonnet 4.5](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)."
},
"preview": false,
"vendor": "Anthropic",
"version": "claude-sonnet-4.5"
},
{
"billing": {
"is_premium": true,
"multiplier": 0.25
},
"capabilities": {
"family": "gemini-2.0-flash",
"limits": {
"max_context_window_tokens": 1000000,
"max_output_tokens": 8192,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/heic",
"image/heif"
]
}
},
"object": "model_capabilities",
"supports": {
"streaming": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gemini-2.0-flash-001",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "lightweight",
"model_picker_enabled": true,
"name": "Gemini 2.0 Flash",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest Gemini models from Google. [Learn more about how GitHub Copilot serves Gemini 2.0 Flash](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot)."
},
"preview": false,
"vendor": "Google",
"version": "gemini-2.0-flash-001"
},
{
"billing": {
"is_premium": true,
"multiplier": 1,
"restricted_to": [
"pro",
"pro_plus",
"max",
"business",
"enterprise"
]
},
"capabilities": {
"family": "gemini-2.5-pro",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 64000,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/heic",
"image/heif"
]
}
},
"object": "model_capabilities",
"supports": {
"max_thinking_budget": 32768,
"min_thinking_budget": 128,
"parallel_tool_calls": true,
"streaming": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gemini-2.5-pro",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "powerful",
"model_picker_enabled": true,
"name": "Gemini 2.5 Pro",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest Gemini 2.5 Pro model from Google. [Learn more about how GitHub Copilot serves Gemini 2.5 Pro](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task#gemini-25-pro)."
},
"preview": false,
"vendor": "Google",
"version": "gemini-2.5-pro"
},
{
"billing": {
"is_premium": true,
"multiplier": 0.33
},
"capabilities": {
"family": "o4-mini",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 16384,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"structured_outputs": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "o4-mini",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "lightweight",
"model_picker_enabled": true,
"name": "o4-mini (Preview)",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest o4-mini model from OpenAI. [Learn more about how GitHub Copilot serves o4-mini](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-openai-o4-mini-in-github-copilot)."
},
"preview": true,
"vendor": "Azure OpenAI",
"version": "o4-mini-2025-04-16"
},
{
"billing": {
"is_premium": true,
"multiplier": 0.33
},
"capabilities": {
"family": "o4-mini",
"limits": {
"max_context_window_tokens": 200000,
"max_output_tokens": 100000,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"structured_outputs": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "o4-mini-2025-04-16",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "o4-mini (Preview)",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest o4-mini model from OpenAI. [Learn more about how GitHub Copilot serves o4-mini](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-openai-o4-mini-in-github-copilot)."
},
"preview": true,
"vendor": "OpenAI",
"version": "o4-mini-2025-04-16"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "gpt-4.1",
"limits": {
"max_context_window_tokens": 128000,
"max_output_tokens": 16384,
"max_prompt_tokens": 128000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"structured_outputs": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "gpt-4.1-2025-04-14",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_enabled": false,
"name": "GPT-4.1",
"object": "model",
"policy": {
"state": "enabled",
"terms": "Enable access to the latest GPT-4.1 model from OpenAI. [Learn more about how GitHub Copilot serves GPT-4.1](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task#gpt-41)."
},
"preview": false,
"vendor": "Azure OpenAI",
"version": "gpt-4.1-2025-04-14"
},
{
"billing": {
"is_premium": false,
"multiplier": 0
},
"capabilities": {
"family": "oswe-vscode",
"limits": {
"max_context_window_tokens": 264000,
"max_output_tokens": 64000,
"max_prompt_tokens": 200000,
"vision": {
"max_prompt_image_size": 3145728,
"max_prompt_images": 1,
"supported_media_types": [
"image/jpeg",
"image/png",
"image/webp",
"image/gif"
]
}
},
"object": "model_capabilities",
"supports": {
"parallel_tool_calls": true,
"streaming": true,
"structured_outputs": true,
"tool_calls": true,
"vision": true
},
"tokenizer": "o200k_base",
"type": "chat"
},
"id": "oswe-vscode-prime",
"is_chat_default": false,
"is_chat_fallback": false,
"model_picker_category": "versatile",
"model_picker_enabled": true,
"name": "Copilot SWE (Preview)",
"object": "model",
"preview": true,
"vendor": "Azure OpenAI",
"version": "copilot-swe"
}
]
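For context on what an extension itself can see (as opposed to this internal debug dump), here is a minimal TypeScript sketch using the public VS Code Language Model API. Note that `vscode.lm` only surfaces `maxInputTokens`, so output limits like the ones in the dump above still have to be tracked as provider metadata; the `vendor: "copilot"` selector is an assumption about which models you want listed.

```typescript
// Minimal sketch: list Copilot chat models visible through the public
// VS Code Language Model API. Unlike the raw dump above, this API exposes
// only maxInputTokens, which is why output limits are kept in provider metadata.
import * as vscode from "vscode";

export async function logAvailableChatModels(): Promise<void> {
	// Returns the chat models the current user can access for the given vendor.
	const models = await vscode.lm.selectChatModels({ vendor: "copilot" });

	for (const model of models) {
		console.log(
			`${model.vendor}/${model.family} (${model.id}) ` +
				`version=${model.version} maxInputTokens=${model.maxInputTokens}`
		);
	}
}
```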
chrarnoldus reviewed on Oct 15, 2025.
chrarnoldus approved these changes on Oct 15, 2025.
This PR updates the VS Code Language Model provider metadata to reflect current model limits.
Highlights:
Why:
Notes:
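For illustration only, a sketch of the kind of mapping a metadata refresh like this implies: folding the raw Copilot limits shown above into per-model provider info. The interfaces, function name, and fallback defaults below are hypothetical assumptions, not the repository's actual types.

```typescript
// Hypothetical sketch of folding raw Copilot model limits into provider metadata.
// Field names mirror the raw dump above; ModelInfo and the fallback defaults are
// illustrative assumptions, not the actual Kilo Code implementation.
interface RawModelEntry {
	id: string;
	capabilities: {
		limits?: {
			max_context_window_tokens?: number;
			max_output_tokens?: number;
		};
		supports?: { vision?: boolean };
	};
}

interface ModelInfo {
	contextWindow: number;
	maxTokens: number;
	supportsImages: boolean;
}

function toModelInfo(entry: RawModelEntry): ModelInfo {
	const limits = entry.capabilities.limits ?? {};
	return {
		// Fall back to conservative values when the API omits a limit.
		contextWindow: limits.max_context_window_tokens ?? 128_000,
		maxTokens: limits.max_output_tokens ?? 16_384,
		supportsImages: entry.capabilities.supports?.vision ?? false,
	};
}
```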