Using your own LLM models in GitHub Copilot CLI

Use a model from an external provider of your choice in Copilot by supplying your own API key.

You can configure Copilot CLI to use your own LLM provider, also called BYOK (Bring Your Own Key), instead of GitHub-hosted models. This lets you connect to OpenAI-compatible endpoints, Azure OpenAI, or Anthropic, including locally running models such as Ollama.

Prerequisites

  • Copilot CLI is installed. See Installing GitHub Copilot CLI.
  • You have an API key from a supported LLM provider, or you have a local model running (such as Ollama).

Supported providers

Copilot CLI supports three provider types:

  • openai: OpenAI, Ollama, vLLM, Foundry Local, and any other endpoint compatible with the OpenAI Chat Completions API. This is the default provider type.
  • azure: Azure OpenAI Service.
  • anthropic: Anthropic (Claude models).

For additional examples, run copilot help providers in your terminal.

Model requirements

Models must support tool calling (also called function calling) and streaming. If a model lacks either capability, Copilot CLI returns an error. For best results, use a model with a context window of at least 128k tokens.

Configuring your provider

You configure your model provider by setting environment variables before starting Copilot CLI.

  • COPILOT_PROVIDER_BASE_URL (required): The base URL of your model provider's API endpoint.
  • COPILOT_PROVIDER_TYPE (optional): The provider type: openai (default), azure, or anthropic.
  • COPILOT_PROVIDER_API_KEY (optional): Your API key for the provider. Not required for providers that do not use authentication, such as a local Ollama instance.
  • COPILOT_MODEL (required): The model identifier to use. You can also set this with the --model command-line flag.

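Taken together, a minimal configuration might look like the following sketch. The values are hypothetical examples, assuming a local Ollama instance with a model named llama3.2:

```shell
# Hypothetical example: configuring all variables for a local Ollama instance.
export COPILOT_PROVIDER_BASE_URL=http://localhost:11434  # required
export COPILOT_PROVIDER_TYPE=openai                      # optional; openai is the default
export COPILOT_MODEL=llama3.2                            # required; or pass --model instead
# COPILOT_PROVIDER_API_KEY is omitted here: a local Ollama instance needs no authentication.
```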
Connecting to an OpenAI-compatible endpoint

Use the following steps if you are connecting to OpenAI, Ollama, vLLM, Foundry Local, or any other endpoint that is compatible with the OpenAI Chat Completions API.

  1. Set environment variables for your provider. For example, for a local Ollama instance:

    export COPILOT_PROVIDER_BASE_URL=http://localhost:11434
    export COPILOT_MODEL=YOUR-MODEL-NAME
    

    Replace YOUR-MODEL-NAME with the name of the model you have pulled in Ollama (for example, llama3.2).

  2. For a remote endpoint such as OpenAI, also set your API key.

    export COPILOT_PROVIDER_BASE_URL=https://api.openai.com
    export COPILOT_PROVIDER_API_KEY=YOUR-OPENAI-API-KEY
    export COPILOT_MODEL=YOUR-MODEL-NAME
    

    Replace YOUR-OPENAI-API-KEY with your OpenAI API key and YOUR-MODEL-NAME with the model you want to use (for example, gpt-4o).

  3. Start Copilot CLI.

copilot
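Before starting, you can optionally sanity-check that the endpoint is reachable. This sketch assumes the server exposes the standard GET /v1/models route of the OpenAI API, which most compatible servers (including Ollama and vLLM) provide:

```shell
# Sketch: derive the models listing URL from the configured base URL
# (assumes the server implements the standard /v1/models route).
export COPILOT_PROVIDER_BASE_URL=http://localhost:11434
MODELS_URL="${COPILOT_PROVIDER_BASE_URL}/v1/models"
echo "$MODELS_URL"
# curl -s "$MODELS_URL"   # should return the model identifiers the server can serve
```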

Connecting to Azure OpenAI

  1. Set the environment variables for Azure OpenAI.

    export COPILOT_PROVIDER_BASE_URL=https://YOUR-RESOURCE-NAME.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT-NAME
    export COPILOT_PROVIDER_TYPE=azure
    export COPILOT_PROVIDER_API_KEY=YOUR-AZURE-API-KEY
    export COPILOT_MODEL=YOUR-DEPLOYMENT-NAME
    

    Replace the following placeholders:

    • YOUR-RESOURCE-NAME: your Azure OpenAI resource name
    • YOUR-DEPLOYMENT-NAME: the name of your model deployment
    • YOUR-AZURE-API-KEY: your Azure OpenAI API key
  2. Start Copilot CLI.

copilot
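Since the Azure base URL embeds both the resource name and the deployment name, it can help to assemble it from variables so the two stay consistent. A minimal sketch, using hypothetical placeholder values:

```shell
# Sketch: build the Azure OpenAI base URL from its two parts.
# RESOURCE and DEPLOYMENT are hypothetical example values.
RESOURCE=my-resource
DEPLOYMENT=my-gpt4o-deployment
export COPILOT_PROVIDER_BASE_URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}"
export COPILOT_PROVIDER_TYPE=azure
export COPILOT_MODEL="$DEPLOYMENT"   # for Azure, the model is the deployment name
echo "$COPILOT_PROVIDER_BASE_URL"
```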

Connecting to Anthropic

  1. Set the environment variables for Anthropic:

    export COPILOT_PROVIDER_TYPE=anthropic
    export COPILOT_PROVIDER_API_KEY=YOUR-ANTHROPIC-API-KEY
    export COPILOT_MODEL=YOUR-MODEL-NAME
    

    Replace YOUR-ANTHROPIC-API-KEY with your Anthropic API key and YOUR-MODEL-NAME with the Claude model you want to use (for example, claude-opus-4-5).

  2. Start Copilot CLI.

copilot
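Rather than typing the key directly into an export (where it lands in shell history), you can load it from a file. This is a sketch under assumed conventions; the key file path is a hypothetical choice, not something Copilot CLI itself reads:

```shell
# Sketch: load the Anthropic key from a file to keep it out of shell history.
# KEY_FILE is a hypothetical location; use whatever secret store you prefer.
KEY_FILE="${ANTHROPIC_KEY_FILE:-$HOME/.config/anthropic/api-key}"
export COPILOT_PROVIDER_TYPE=anthropic
export COPILOT_MODEL=claude-opus-4-5
if [ -r "$KEY_FILE" ]; then
  export COPILOT_PROVIDER_API_KEY="$(cat "$KEY_FILE")"
fi
```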

Running in offline mode

You can run Copilot CLI in offline mode to prevent it from contacting GitHub's servers. This is designed for isolated environments where the CLI should communicate only with your local or on-premises model provider.

Important

Offline mode only guarantees full network isolation if your provider is also local or within the same isolated environment. If COPILOT_PROVIDER_BASE_URL points to a remote endpoint, your prompts and code context are still sent over the network to that provider.

  1. Configure your provider environment variables as described in Configuring your provider.

  2. Set the offline mode environment variable:

    export COPILOT_OFFLINE=true
    
  3. Start Copilot CLI.

copilot
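The steps above can be combined into one fully local sketch, assuming a local Ollama instance (the model name is a hypothetical example):

```shell
# Sketch: fully local, offline configuration.
# Local Ollama assumed; llama3.2 is an example model name.
export COPILOT_PROVIDER_BASE_URL=http://localhost:11434
export COPILOT_MODEL=llama3.2
export COPILOT_OFFLINE=true
```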