Conversation

shameez-struggles-to-commit (Contributor)

This PR updates the VS Code Language Model provider metadata to reflect current model limits.

Highlights:

  • Updated context windows, prompt/input limits, and max output tokens for the VS Code LM provider entries that match the models in the available list.
  • GPT-5-mini now correctly uses a 264k context window and a max output of 127,805 tokens. GPT-5 and GPT-5-mini also bypass the 20% output cap via the existing budgeting logic, so they use their configured max outputs (see the sketch after the Notes list below).
  • Added a changeset for @roo-code/types so the updated metadata ships properly.

Why:

  • Prior metadata assumed a generic 128k context, which caused budgeting and UI inconsistencies.

Notes:

  • This change only adjusts metadata; no runtime logic was modified.
  • Type shapes remain the same.
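
For context, here is a minimal sketch of the kind of metadata entry and output-budgeting rule described above. The field names (contextWindow, maxTokens) and the resolveMaxOutput helper are illustrative assumptions, not the exact shapes in @roo-code/types or the provider code.

```ts
// Hypothetical ModelInfo shape; real field names in @roo-code/types may differ.
interface ModelInfo {
  contextWindow: number // from max_context_window_tokens in the VS Code LM data
  maxTokens: number // configured max output tokens
  supportsImages?: boolean
}

// Example entry mirroring the GPT-5-mini limits this PR sets.
const gpt5MiniInfo: ModelInfo = {
  contextWindow: 264_000,
  maxTokens: 127_805,
  supportsImages: true,
}

// Illustrative budgeting rule: cap output at 20% of the context window unless
// the model is allowed to bypass the cap (as GPT-5 / GPT-5-mini do per this PR).
function resolveMaxOutput(info: ModelInfo, bypassCap: boolean): number {
  const capped = Math.floor(info.contextWindow * 0.2) // 264,000 * 0.2 = 52,800
  return bypassCap ? info.maxTokens : Math.min(info.maxTokens, capped)
}
```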


changeset-bot (bot) commented Oct 9, 2025

🦋 Changeset detected

Latest commit: 632e7f3

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package:
  kilo-code (Patch)


@chrarnoldus (Collaborator)

What's the source of this data?

@shameez-struggles-to-commit (Contributor, Author)

> What's the source of this data?

Pulled it from VS Code itself; there's a debug chat window where you can see the details for each model offered (see also the API sketch after the summary below):

Available Models (Raw API Response)

[
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4.1",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 16384,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4.1",
    "is_chat_default": true,
    "is_chat_fallback": true,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "GPT-4.1",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest GPT-4.1 model from OpenAI. [Learn more about how GitHub Copilot serves GPT-4.1](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task#gpt-41)."
    },
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4.1-2025-04-14"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-5-mini",
      "limits": {
        "max_context_window_tokens": 264000,
        "max_output_tokens": 64000,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-5-mini",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "lightweight",
    "model_picker_enabled": true,
    "name": "GPT-5 mini",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest GPT-5 mini model from OpenAI. [Learn more about how GitHub Copilot serves GPT-5 mini](https://gh.io/copilot-openai)."
    },
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-5-mini"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 1,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "gpt-5",
      "limits": {
        "max_context_window_tokens": 264000,
        "max_output_tokens": 64000,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-5",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "GPT-5",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest GPT-5 model from OpenAI. [Learn more about how GitHub Copilot serves GPT-5](https://gh.io/copilot-openai)."
    },
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-5"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-3.5-turbo",
      "limits": {
        "max_context_window_tokens": 16384,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 12288
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "cl100k_base",
      "type": "chat"
    },
    "id": "gpt-3.5-turbo",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT 3.5 Turbo",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-3.5-turbo-0613"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-3.5-turbo",
      "limits": {
        "max_context_window_tokens": 16384,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 12288
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "cl100k_base",
      "type": "chat"
    },
    "id": "gpt-3.5-turbo-0613",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT 3.5 Turbo",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-3.5-turbo-0613"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4o-mini",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 12288
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4o-mini",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT-4o mini",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4o-mini-2024-07-18"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4o-mini",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 12288
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4o-mini-2024-07-18",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT-4o mini",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4o-mini-2024-07-18"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4",
      "limits": {
        "max_context_window_tokens": 32768,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 32768
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "cl100k_base",
      "type": "chat"
    },
    "id": "gpt-4",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT 4",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4-0613"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4",
      "limits": {
        "max_context_window_tokens": 32768,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 32768
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "cl100k_base",
      "type": "chat"
    },
    "id": "gpt-4-0613",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT 4",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4-0613"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4-turbo",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 64000
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "cl100k_base",
      "type": "chat"
    },
    "id": "gpt-4-0125-preview",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT 4 Turbo",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4-0125-preview"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4o",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 64000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4o",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "GPT-4o",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4o-2024-11-20"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4o",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 16384,
        "max_prompt_tokens": 64000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4o-2024-11-20",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT-4o",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4o-2024-11-20"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4o",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 64000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4o-2024-05-13",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT-4o",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4o-2024-05-13"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4o",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 4096,
        "max_prompt_tokens": 64000
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4-o-preview",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT-4o",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4o-2024-05-13"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4o",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 16384,
        "max_prompt_tokens": 64000
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4o-2024-08-06",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT-4o",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4o-2024-08-06"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 0.33
    },
    "capabilities": {
      "family": "o3-mini",
      "limits": {
        "max_context_window_tokens": 200000,
        "max_output_tokens": 100000,
        "max_prompt_tokens": 64000
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "o3-mini",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "lightweight",
    "model_picker_enabled": true,
    "name": "o3-mini",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "o3-mini-2025-01-31"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 0.33
    },
    "capabilities": {
      "family": "o3-mini",
      "limits": {
        "max_context_window_tokens": 200000,
        "max_output_tokens": 100000,
        "max_prompt_tokens": 64000
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "o3-mini-2025-01-31",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "o3-mini",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "o3-mini-2025-01-31"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 0.33
    },
    "capabilities": {
      "family": "o3-mini",
      "limits": {
        "max_context_window_tokens": 200000,
        "max_output_tokens": 100000,
        "max_prompt_tokens": 64000
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "o3-mini-paygo",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "o3-mini",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "o3-mini-paygo"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4.1",
      "object": "model_capabilities",
      "supports": {
        "streaming": true
      },
      "tokenizer": "o200k_base",
      "type": "completion"
    },
    "id": "gpt-41-copilot",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "GPT-4.1 Copilot",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-41-copilot"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "grok-code",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 64000,
        "max_prompt_tokens": 128000
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "grok-code-fast-1",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "powerful",
    "model_picker_enabled": true,
    "name": "Grok Code Fast 1 (Preview)",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest Grok Code Fast 1 model from xAI. If enabled, you instruct GitHub Copilot to send data to xAI Grok Code Fast 1. [Learn more about how GitHub Copilot serves Grok Code Fast 1](https://docs.github.com/en/copilot/reference/ai-models/model-hosting#xai-models). During launch week, [promotional pricing is 0x](https://gh.io/copilot-grok-code-promo)."
    },
    "preview": true,
    "vendor": "xAI",
    "version": "grok-code-fast-1"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 1,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "gpt-5-codex",
      "limits": {
        "max_context_window_tokens": 200000,
        "max_output_tokens": 64000,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-5-codex",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "powerful",
    "model_picker_enabled": true,
    "name": "GPT-5-Codex (Preview)",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest GPT-5-Codex model from OpenAI. [Learn more about how GitHub Copilot serves GPT-5-Codex](https://gh.io/copilot-openai)."
    },
    "preview": true,
    "supported_endpoints": [
      "/responses"
    ],
    "vendor": "OpenAI",
    "version": "gpt-5-codex"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "text-embedding-ada-002",
      "limits": {
        "max_inputs": 512
      },
      "object": "model_capabilities",
      "supports": {},
      "tokenizer": "cl100k_base",
      "type": "embeddings"
    },
    "id": "text-embedding-ada-002",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "Embedding V2 Ada",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "text-embedding-3-small"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "text-embedding-3-small",
      "limits": {
        "max_inputs": 512
      },
      "object": "model_capabilities",
      "supports": {
        "dimensions": true
      },
      "tokenizer": "cl100k_base",
      "type": "embeddings"
    },
    "id": "text-embedding-3-small",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "Embedding V3 small",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "text-embedding-3-small"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "text-embedding-3-small",
      "object": "model_capabilities",
      "supports": {
        "dimensions": true
      },
      "tokenizer": "cl100k_base",
      "type": "embeddings"
    },
    "id": "text-embedding-3-small-inference",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "Embedding V3 small (Inference)",
    "object": "model",
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "text-embedding-3-small"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 1
    },
    "capabilities": {
      "family": "claude-3.5-sonnet",
      "limits": {
        "max_context_window_tokens": 90000,
        "max_output_tokens": 8192,
        "max_prompt_tokens": 90000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "claude-3.5-sonnet",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "Claude Sonnet 3.5",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest Claude 3.5 Sonnet model from Anthropic. [Learn more about how GitHub Copilot serves Claude 3.5 Sonnet](https://docs.github.com/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot)."
    },
    "preview": false,
    "vendor": "Anthropic",
    "version": "claude-3.5-sonnet"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 1,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "claude-3.7-sonnet",
      "limits": {
        "max_context_window_tokens": 200000,
        "max_output_tokens": 16384,
        "max_prompt_tokens": 90000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 5,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "claude-3.7-sonnet",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "Claude Sonnet 3.7",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest Claude 3.7 Sonnet model from Anthropic. [Learn more about how GitHub Copilot serves Claude 3.7 Sonnet](https://docs.github.com/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot)."
    },
    "preview": false,
    "vendor": "Anthropic",
    "version": "claude-3.7-sonnet"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 1.25,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "claude-3.7-sonnet-thought",
      "limits": {
        "max_context_window_tokens": 200000,
        "max_output_tokens": 16384,
        "max_prompt_tokens": 90000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "claude-3.7-sonnet-thought",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "powerful",
    "model_picker_enabled": true,
    "name": "Claude Sonnet 3.7 Thinking",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest Claude 3.7 Sonnet model from Anthropic. [Learn more about how GitHub Copilot serves Claude 3.7 Sonnet](https://docs.github.com/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot)."
    },
    "preview": false,
    "vendor": "Anthropic",
    "version": "claude-3.7-sonnet-thought"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 1,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "claude-sonnet-4",
      "limits": {
        "max_context_window_tokens": 216000,
        "max_output_tokens": 16000,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 5,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "max_thinking_budget": 32000,
        "min_thinking_budget": 1024,
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "claude-sonnet-4",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "Claude Sonnet 4",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest Claude Sonnet 4 model from Anthropic. [Learn more about how GitHub Copilot serves Claude Sonnet 4](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)."
    },
    "preview": false,
    "vendor": "Anthropic",
    "version": "claude-sonnet-4"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 1,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "claude-sonnet-4.5",
      "limits": {
        "max_context_window_tokens": 144000,
        "max_output_tokens": 16000,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 5,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "claude-sonnet-4.5",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "Claude Sonnet 4.5",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest Claude Sonnet 4.5 model from Anthropic. [Learn more about how GitHub Copilot serves Claude Sonnet 4.5](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)."
    },
    "preview": false,
    "vendor": "Anthropic",
    "version": "claude-sonnet-4.5"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 0.25
    },
    "capabilities": {
      "family": "gemini-2.0-flash",
      "limits": {
        "max_context_window_tokens": 1000000,
        "max_output_tokens": 8192,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/heic",
            "image/heif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "streaming": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gemini-2.0-flash-001",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "lightweight",
    "model_picker_enabled": true,
    "name": "Gemini 2.0 Flash",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest Gemini models from Google. [Learn more about how GitHub Copilot serves Gemini 2.0 Flash](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot)."
    },
    "preview": false,
    "vendor": "Google",
    "version": "gemini-2.0-flash-001"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 1,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "gemini-2.5-pro",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 64000,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/heic",
            "image/heif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "max_thinking_budget": 32768,
        "min_thinking_budget": 128,
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gemini-2.5-pro",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "powerful",
    "model_picker_enabled": true,
    "name": "Gemini 2.5 Pro",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest Gemini 2.5 Pro model from Google. [Learn more about how GitHub Copilot serves Gemini 2.5 Pro](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task#gemini-25-pro)."
    },
    "preview": false,
    "vendor": "Google",
    "version": "gemini-2.5-pro"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 0.33
    },
    "capabilities": {
      "family": "o4-mini",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 16384,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "o4-mini",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "lightweight",
    "model_picker_enabled": true,
    "name": "o4-mini (Preview)",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest o4-mini model from OpenAI. [Learn more about how GitHub Copilot serves o4-mini](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-openai-o4-mini-in-github-copilot)."
    },
    "preview": true,
    "vendor": "Azure OpenAI",
    "version": "o4-mini-2025-04-16"
  },
  {
    "billing": {
      "is_premium": true,
      "multiplier": 0.33
    },
    "capabilities": {
      "family": "o4-mini",
      "limits": {
        "max_context_window_tokens": 200000,
        "max_output_tokens": 100000,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "o4-mini-2025-04-16",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "o4-mini (Preview)",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest o4-mini model from OpenAI. [Learn more about how GitHub Copilot serves o4-mini](https://docs.github.com/en/copilot/using-github-copilot/ai-models/using-openai-o4-mini-in-github-copilot)."
    },
    "preview": true,
    "vendor": "OpenAI",
    "version": "o4-mini-2025-04-16"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "gpt-4.1",
      "limits": {
        "max_context_window_tokens": 128000,
        "max_output_tokens": 16384,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "gpt-4.1-2025-04-14",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_enabled": false,
    "name": "GPT-4.1",
    "object": "model",
    "policy": {
      "state": "enabled",
      "terms": "Enable access to the latest GPT-4.1 model from OpenAI. [Learn more about how GitHub Copilot serves GPT-4.1](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task#gpt-41)."
    },
    "preview": false,
    "vendor": "Azure OpenAI",
    "version": "gpt-4.1-2025-04-14"
  },
  {
    "billing": {
      "is_premium": false,
      "multiplier": 0
    },
    "capabilities": {
      "family": "oswe-vscode",
      "limits": {
        "max_context_window_tokens": 264000,
        "max_output_tokens": 64000,
        "max_prompt_tokens": 200000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 1,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp",
            "image/gif"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "parallel_tool_calls": true,
        "streaming": true,
        "structured_outputs": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "oswe-vscode-prime",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "versatile",
    "model_picker_enabled": true,
    "name": "Copilot SWE (Preview)",
    "object": "model",
    "preview": true,
    "vendor": "Azure OpenAI",
    "version": "copilot-swe"
  }
]

Summary

Total models     : 35
Chat models      : 31
Completion models: 1
Premium models   : 14
Preview models   : 5
Default chat     : gpt-4.1
Fallback chat    : gpt-4.1
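
For anyone who wants to inspect this programmatically rather than through the debug view, a minimal sketch using the public VS Code Language Model API is below. Note this is an assumption-level example: `vscode.lm.selectChatModels` only surfaces a subset of the metadata above (id, vendor, family, version, maxInputTokens); the richer limits such as max_output_tokens and billing come from Copilot's debug chat view, not this API.

```ts
import * as vscode from "vscode"

// List the chat models exposed to extensions and log their basic limits.
export async function logAvailableModels(): Promise<void> {
  const models = await vscode.lm.selectChatModels({ vendor: "copilot" })
  for (const m of models) {
    console.log(`${m.id} (${m.family} ${m.version}) maxInputTokens=${m.maxInputTokens}`)
  }
}
```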

.changeset/update-vscode-lm-models.md (review thread: outdated, resolved)
@chrarnoldus merged commit 49c2cb8 into Kilo-Org:main on Oct 15, 2025 (11 checks passed).
@shameez-struggles-to-commit deleted the fix/vscode-lm-gpt5-mini-output branch on Oct 15, 2025 at 14:44.