
Pre-submission Checklist

  • I have verified this would not be more appropriate as a feature request in a specific repository
  • I have searched existing discussions to avoid duplicates

Your Idea

Fine-Grained Resource Control for Multi-User Authorization

Motivation

Many MCP clients require per-user credentials to call downstream APIs, and the lack of a standard leads to inconsistent
client behavior and security gaps. This proposal standardizes OAuth-based authorization in MCP.

Background

Multiple ideas have been discussed in the MCP community about how to handle authorization for tools and resources in
multi-user scenarios:

  • Per-Tenant Configuration via _meta: One proposal (Discussion #193) suggested allowing a single MCP server
    instance to serve many end-users by passing a clientId and a user-specific clientConfig in each request.

    • This would reconfigure the server per call (e.g. providing a different API token or target resource for each user)
      instead of requiring separate server instances.
    • This approach treats user credentials or context as part of the call metadata, enabling multi-tenant clients to
      dynamically inject config for each end-user.
  • MCP Server as OAuth 2.0 Resource Server: Issue #205 proposed that MCP servers should act as OAuth 2.0 resource
    servers, using external identity providers for authorization.

    • In this model, the MCP server itself doesn’t issue tokens or maintain session state; the client obtains an access
      token from an OAuth authorization server (via any standard flow) and presents it with requests.
    • The MCP server can then rely on standard OAuth mechanisms, for example returning an HTTP 401 with a
      WWW-Authenticate challenge or providing discovery info, to prompt the client to obtain proper tokens.
    • This design leverages existing enterprise auth infrastructure (increasing adoptability) and keeps MCP servers
      stateless.
    • It could also enable the server to perform an OAuth 2.0 Token Exchange (RFC 8693) on the client’s token, acting
      as a proxy that obtains a delegated token and calls downstream APIs on the user’s behalf.
  • On-Behalf-Of (OBO) Token Exchange: Related to the above, discussion #214 (Support On-Behalf-Of Token Exchange
    protocol for Agent-to-Agent Communications) raised support for OAuth 2.0 Token Exchange (RFC 8693) to avoid passing
    raw user tokens around. The idea is to enable delegation: an agent client might present one token to an MCP server,
    which exchanges it for a new token restricted to the server’s context (preventing misuse of the original token).
    This was motivated by security concerns that simply forwarding user tokens is risky.

    • For example, the Cloudflare team illustrated a model where the MCP server issues its own token to the client
      instead of exposing the user’s Google API token, thereby limiting an attacker’s capabilities if the client token
      is compromised
    • This addresses the OWASP-described risk of “Excessive Agency” by ensuring a stolen token can only invoke the MCP
      server’s constrained tools, not the upstream service broadly.
  • Per-Tool OAuth Scopes (Multi-User Auth): In discussion #234 (Multi-user Authorization), wdawson proposed adding an
    authorization spec to each tool definition to declare what kind of token and scopes it requires.

    • The MCP client (e.g. an AI agent) would then acquire an appropriate end-user token (say via OAuth2) and supply it
      at call time in the request metadata.
    • This lets one agent serve many users, each providing their own credentials for external services, without the MCP
      server persisting those credentials. The proposal introduced a JSON-RPC error code for authorization failures
      (-32001 for missing or invalid tokens), analogous to HTTP 401 Unauthorized.
    • The emphasis was on keeping the server stateless and focused on proxying the tool action, while pushing token
      management to the client.
    • This approach was seen as meeting the industry “where it is” with OAuth, rather than requiring new auth
      frameworks.
  • Client vs. Server Responsibility Debate: There is ongoing discussion about the trade-offs of the client-managed
    token approach.

    • Many agree that the MCP server should stay focused on exposing tools and resources without storing user tokens. At
      the same time, concerns were raised about security and complexity for client developers. For instance, if each
      tool integration requires the client to implement a different OAuth flow, it burdens agent developers and could
      discourage use of certain tools.
    • It was suggested that MCP needs a mechanism for the server to guide the client through authorization when needed,
      for example by providing an authorization URL or instructions if a token is missing. Others pointed out that
      standard OAuth consent flows already allow users to grant a subset of scopes, and the server could simply enforce
      scope requirements (skipping or failing a tool call if not authorized) without additional protocol changes.
    • There was even a proposal that the server could handle the OAuth exchange and then hand the obtained token back to
      the client for storage, combining smoother user onboarding with client-side token storage thereafter.

    In summary, the community has explored per-tool and per-resource auth scopes, multi-tenant call metadata, client vs
    server auth roles, token exchange, and error handling for auth. Building on those ideas, this proposal aims to
    consolidate a path forward for fine-grained resource control in MCP, aligning with established terminology and
    extending the protocol where needed.


Proposal Summary

Per-Tool / Resource Auth Metadata

Each tool or resource declares its own authorization requirements via a separate policy. The MCP server is responsible for enforcing this policy.

| Field | Type | Purpose |
| --- | --- | --- |
| `protectedResourceMetadata` | object | An RFC 9728-compliant JSON fragment describing the resource, including its URI and associated `authorization_servers`. |
| `required_scopes` | string[] | The minimal OAuth scopes required. If empty, only a bearer token is expected. |
| `use_id_token` | boolean | If `true`, the system requests an ID token instead of, or in addition to, an access token. |
| `client_id` | string | The confidential OAuth 2.0 client identifier registered with the Authorization Server. |

Backend-for-Frontend (BFF) Architecture with Resource-Bound Access Tokens

To enforce strong security boundaries and maintain control over OAuth flows, the MCP architecture adopts a Backend-for-Frontend (BFF) pattern.
In this model, the MCP Server acts as an intermediary between the MCP Client (typically a browser or desktop application) and downstream protected resources.

Key Characteristics

  • MCP Client is Lightweight
    The client (frontend) does not directly handle sensitive tokens or secrets. It initiates OAuth flows via URLs provided by the MCP Server, but does not retain or manage access tokens.

  • MCP Server as the Confidential Client
    The MCP Server is the registered OAuth 2.0 confidential client. It performs the token exchange, holds client credentials, and manages scopes.

  • Resource-Bound Access Tokens
    Each issued access token is explicitly bound to a single resource (audience). This means:

    • The Authorization Server (AS) knows which resource the token is intended for.
    • Tokens cannot be reused across resources, reducing the blast radius of leaks or misuse.
    • Fine-grained policies (per resource) are easier to enforce.
  • Initiation Flow
    When the MCP Client requires access:

    1. The MCP Server constructs an /authorize URL with the appropriate client_id, scope, resource, and code_challenge.
    2. The MCP Client redirects the user-agent to that URL.
    3. On completion, the AS redirects back with an auth code, which the MCP Server exchanges for tokens.
    4. The MCP Server then acts on behalf of the user with the downstream resource.
  • No Token or Secret Exposure to Client
    The frontend never sees client secrets, access tokens, or refresh tokens. This maintains a strong separation of concerns and reduces the attack surface.
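To make the initiation flow concrete, here is a minimal Python sketch of step 1 (the MCP server constructing the /authorize URL). The function name `build_authorize_url` and its signature are illustrative, not part of the proposal; the query parameters follow standard OAuth 2.0 plus the RFC 8707 `resource` indicator used throughout this document.

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def build_authorize_url(as_base: str, client_id: str, resource: str,
                        scopes: list, redirect_uri: str):
    """Build the /authorize URL the MCP server hands to the client.

    Returns (url, state, code_verifier). The verifier never leaves the
    server; it is needed later to redeem the auth code (PKCE, RFC 7636).
    """
    state = secrets.token_urlsafe(16)
    code_verifier = secrets.token_urlsafe(48)
    # S256 challenge: base64url(SHA-256(verifier)) without '=' padding
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode()).digest()
    ).rstrip(b"=").decode()
    params = {
        "response_type": "code",
        "client_id": client_id,
        "resource": resource,            # RFC 8707 resource indicator
        "scope": " ".join(scopes),
        "redirect_uri": redirect_uri,
        "state": state,
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }
    return f"{as_base}/authorize?{urlencode(params)}", state, code_verifier
```

Because the verifier is generated and retained server-side, the client receives only front-channel parameters and can never redeem the code on its own.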

---

Two Authorization Mediation Modes

The MCP server supports two mediation flows for initiating OAuth authorization when credentials are missing or insufficient:


1. HTTP SSE Enforced Security

  • When a Server-Sent Events (SSE) request arrives without credentials, the MCP server responds with:

```http
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer authorization_uri=".../authorize?resource=…&scope=…&client_id=…&code_challenge=…&code_challenge_method=…&redirect_uri=…&state=…",
                  protected_metadata="…/.well-known/oauth-protected-resource"
```

  • The client then initiates a front-channel Authorization Code + PKCE flow using the provided parameters, and retries the SSE request with:

```http
X-Authorization-Exchange: auth_code=…, redirect_uri=…
```
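A client handling this mode needs to pull the `key="value"` parameters out of the Bearer challenge. A small sketch, assuming the challenge shape shown above (the helper name `parse_bearer_challenge` is hypothetical):

```python
import re

def parse_bearer_challenge(header: str) -> dict:
    """Extract key="value" parameters from a WWW-Authenticate: Bearer challenge."""
    if not header.startswith("Bearer "):
        raise ValueError("not a Bearer challenge")
    # Each parameter looks like name="quoted value"
    return dict(re.findall(r'(\w+)="([^"]*)"', header))
```

The resulting dict gives the client the `authorization_uri` to open and, where present, the `protected_metadata` URL for RFC 9728 discovery.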

2. JSON-RPC Enforced Security

  • If a `tools/call` or `resources/read` method is invoked without valid credentials, the server responds with a structured authorization error:

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "error": {
    "code": -32001,
    "message": "Unauthorized",
    "data": {
      "authorizationUri": ".../authorize?resource=…&scope=…&client_id=…&code_challenge=…&code_challenge_method=…&redirect_uri=…&state=…"
    }
  }
}
```

  • The client then initiates a front-channel Authorization Code + PKCE flow using the provided parameters, and retries the request with:

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "myTool",
    "arguments": {},
    "_meta": {
      "authorization": {
        "authCode": "xxxx...",
        "redirectUri": "..."
      }
    }
  }
}
```
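The retry is mechanical: copy the failed request and attach the authorization metadata under `_meta`. A sketch, assuming the `_meta.authorization` shape from this proposal (the helper `retry_with_auth` is hypothetical):

```python
def retry_with_auth(request: dict, auth_code: str, redirect_uri: str) -> dict:
    """Return a copy of a failed JSON-RPC request with the auth code
    attached in params._meta.authorization, ready to resend."""
    params = dict(request.get("params", {}))  # shallow copy; original untouched
    params["_meta"] = {
        "authorization": {"authCode": auth_code, "redirectUri": redirect_uri}
    }
    return {**request, "params": params}
```

The original request object is left unmodified, so the client can retry again with a fresh code if the first redemption fails.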

💡 These two modes are proposed because MCP may operate over multiple transports—including non-HTTP channels like stdio—where traditional HTTP status codes and headers (e.g., 401 Unauthorized, WWW-Authenticate) are not applicable.
In those environments, JSON-RPC error responses serve the same purpose: to deliver authorization metadata and trigger the OAuth flow.
This dual-mode approach ensures consistent security regardless of the transport protocol.


Token Redemption & Caching (MCP Server Side)

  • MCP server redeems the code (PKCE verifier + client_id) at the AS.
  • Access-tokens are cached per (resource, scope, client) triple.
  • Refresh or silent-refresh when possible; never expose tokens outside the server boundary.
  • MCP server should follow secure storage and logging best practices.
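The caching rule above can be sketched as a small in-memory store keyed by the (resource, scope, client) triple; the class names are illustrative only, and a real server would add persistence, refresh, and secure storage:

```python
import time
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class CachedToken:
    access_token: str
    expires_at: float  # monotonic deadline

class TokenCache:
    """Server-side access-token cache keyed by (resource, scope, client_id)."""

    def __init__(self) -> None:
        self._tokens: Dict[Tuple[str, str, str], CachedToken] = {}

    def put(self, resource: str, scope: str, client_id: str,
            token: str, expires_in: int) -> None:
        # Subtract a small skew so we never hand out a token about to expire.
        deadline = time.monotonic() + expires_in - 30
        self._tokens[(resource, scope, client_id)] = CachedToken(token, deadline)

    def get(self, resource: str, scope: str, client_id: str) -> Optional[str]:
        entry = self._tokens.get((resource, scope, client_id))
        if entry and entry.expires_at > time.monotonic():
            return entry.access_token
        return None  # caller should refresh or re-run the code flow
```

Keying on the full triple is what makes tokens resource-bound in practice: a token cached for one resource can never be returned for another.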

Proposal Details

Centralized Authorization Policy Bound to Global MCP, Tools, and Resources

Idea: Every tool or resource advertises the auth rule it needs.

| Field | Type | Purpose |
| --- | --- | --- |
| `protectedResourceMetadata` | object | RFC 9728 JSON object containing the OAuth 2.0 Protected Resource Metadata (as served from /.well-known/oauth-protected-resource/resource=RESOURCE_URI). |
| `requiredScopes` | string[] | The minimal OAuth scopes the caller must present. An empty array means “bearer token is enough; no extra scope checking.” |
| `useIdToken` | boolean | `true` → supply an OpenID Connect ID token instead of, or in addition to, an OAuth access token. |
| `clientId` | string | Confidential OAuth 2.0 client ID. |

Authorization rules are defined in a separate configuration schema that maps to tools, resources, or global defaults.

```json
{
  "global": {
    "protectedResourceMetadata": {
      "resource": "MCPServer",
      "authorizationServers": [
        "https://auth.acme-cloud.com"
      ]
    },
    "requiredScopes": [
      "scope1"
    ],
    "useIdToken": false,
    "clientId": "myClientId1"
  },
  "tools": [
    {
      "protectedResourceMetadata": {
        "resource": "myTool",
        "authorizationServers": []
      },
      "requiredScopes": [
        "scope1"
      ],
      "clientId": "myClientId2"
    }
  ],
  "resources": [
    {
      "protectedResourceMetadata": {
        "resource": "s3://myBucketX/asset",
        "authorizationServers": []
      },
      "requiredScopes": [
        "read_write"
      ],
      "useIdToken": true
    },
    {
      "protectedResourceMetadata": {
        "resource": "gs://myBucketY/asset",
        "authorizationServers": []
      },
      "requiredScopes": [
        "read"
      ],
      "useIdToken": true,
      "clientId": "myClientId3"
    }
  ]
}
```

Each entry in this external policy schema corresponds to the Protected Resource Metadata (RFC 9728) and is extended with MCP-specific fields:

  • requiredScopes: the minimal OAuth scopes that the client must present.
  • useIdToken: a boolean indicating if an OpenID Connect ID token should be used instead of (or in addition to) an
    access token.
  • clientId: the client ID to use for the OAuth2 authorization server.

This unified representation lets operators configure resource metadata and per-resource authorization requirements in
one place, without modifying individual tool definitions.
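A server enforcing this schema has to resolve the effective rule for a given tool, with tool-level entries overriding the global defaults. A minimal sketch (the function `effective_policy` is hypothetical; resources would be resolved the same way):

```python
def effective_policy(policy: dict, tool_resource: str) -> dict:
    """Resolve a tool's auth rule: start from the global defaults,
    then overlay the matching tool-level entry, if any."""
    merged = dict(policy.get("global", {}))
    for entry in policy.get("tools", []):
        if entry["protectedResourceMetadata"]["resource"] == tool_resource:
            merged.update(entry)  # tool-level fields win over global ones
            break
    return merged
```

With the example configuration above, `myTool` would resolve to `clientId` `myClientId2` while still inheriting `useIdToken: false` from the global block.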


Backend-for-Frontend (BFF) Architecture with Resource-Bound Access Tokens

To authorize access to MCP tools and resources, this proposal adopts a Backend-for-Frontend (BFF) architecture with resource-bound access tokens. This ensures that tokens are only valid for their intended audience and cannot be reused across services.

Clients do not proactively fetch authorization policy. Instead, when calling a tool without prior authorization, the MCP server responds with a 401 Unauthorized error that includes the necessary authorization metadata. The client then uses this metadata to initiate the authorization flow.

Specifically, the MCP client launches an OAuth 2.0 Authorization Code flow with PKCE, using the provided parameters such as client_id, resource, code_challenge, and state.
After user authorization, the client receives an authorization code. The MCP server then redeems the code, along with the code verifier, client credentials, and any required parameters, at the token endpoint to obtain an access token.

a) HTTP SSE Enforced Security — Sequence Diagram

The server’s challenge:

```http
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer authorization_uri=".../authorize?resource=…&scope=…&client_id=…&code_challenge=…&code_challenge_method=…&redirect_uri=…&state=…"
```

```mermaid
sequenceDiagram
    participant MCP_Client as MCP Client
    participant OAuth2_Client as OAuth2 Client
    participant MCP_Server as MCP Server
    participant Auth_Server as Authorization Server
    participant Resource as Protected Resource

    MCP_Client->>MCP_Server: tools/list
    MCP_Server-->>MCP_Client: tool list

    MCP_Client->>MCP_Server: tools/call (not authorized yet)
    MCP_Server-->>MCP_Client: 401 WWW-Authenticate authorization_uri="..."
    MCP_Client->>Auth_Server: Authorization Request (resource, client_id, PKCE, state, scope, redirect_uri)
    Auth_Server-->>MCP_Client: auth code
    MCP_Client->>MCP_Server: tools/call (X-Authorization-Exchange: auth_code=code, redirect_uri=https://localhost:port/callback)
    MCP_Server->>OAuth2_Client: exchange auth code
    OAuth2_Client->>Auth_Server: Token Request (auth_code + PKCE verifier)
    Auth_Server-->>OAuth2_Client: Access Token
    OAuth2_Client-->>MCP_Server: return access token
    MCP_Server->>Resource: tool invocation with bearer token
    Resource-->>MCP_Server: execution result
    MCP_Server-->>MCP_Client: result
```

b) JSON-RPC Enforced Security

```mermaid
sequenceDiagram
    participant MCP_Client as MCP Client
    participant OAuth2_Client as OAuth2 Client
    participant MCP_Server as MCP Server
    participant Auth_Server as Authorization Server
    participant Resource as Protected Resource

    MCP_Client->>MCP_Server: tools/list
    MCP_Server-->>MCP_Client: tool list

    MCP_Client->>MCP_Server: tools/call (not authorized yet)
    MCP_Server-->>MCP_Client: error.code -32001, error.data.authorizationUri="..."
    MCP_Client->>Auth_Server: Authorization Request (resource, client_id, PKCE, state, scope, redirect_uri)
    Auth_Server-->>MCP_Client: auth code
    MCP_Client->>MCP_Server: tools/call (_meta.authorization: authCode=code, redirectUri=https://localhost:port/callback)
    MCP_Server->>OAuth2_Client: exchange auth code
    OAuth2_Client->>Auth_Server: Token Request (code + PKCE verifier)
    Auth_Server-->>OAuth2_Client: Access Token
    OAuth2_Client-->>MCP_Server: return access token
    MCP_Server->>Resource: tool invocation with bearer token
    Resource-->>MCP_Server: execution result
    MCP_Server-->>MCP_Client: result
```

Client-Side Implementation

When a tool requires authorization, the MCP client starts the OAuth 2.0 Authorization-Code flow with PKCE.
To capture the authorization response, you MAY choose one of three strategies:

| Option | Strategy | Typical platforms | Redirect-URI pattern | Who receives the code? |
| --- | --- | --- | --- | --- |
| A | Loopback listener | Desktop CLIs, native apps | `http://127.0.0.1:{port}/callback` | Client |
| B | Web-Message (postMessage) | Browser SPAs, Electron widgets | `https://sdk.example.com/oauth/relay.html` | Client |
| C | Direct MCP redirect | Any platform | `https://mcp.example.com/oauth/callback` | MCP server |

All options MUST use the state parameter to bind request ↔ response (RFC 6749 §4.1.1) and MUST include PKCE.
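The `state` binding above amounts to two small steps: generate an unguessable value before redirecting, and compare it in constant time on the way back. A sketch (helper names are illustrative):

```python
import hmac
import secrets

def new_state() -> str:
    """Generate an unguessable state value (RFC 6749 §10.12)."""
    return secrets.token_urlsafe(16)

def state_matches(expected: str, received: str) -> bool:
    """Constant-time comparison of the state echoed back on the redirect,
    avoiding timing side channels."""
    return hmac.compare_digest(expected, received)
```

If the comparison fails, the client MUST discard the authorization response rather than forwarding the code to the MCP server.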

Option A – Loopback Listener (RFC 8252)

  1. Start a local HTTP server on a random port, e.g. http://127.0.0.1:38545/callback.
  2. Open the browser to /authorize with client_id, state, PKCE params, and the loopback redirect_uri.
  3. Receive `GET /callback?code=…&state=…`
  4. POST to MCP with X-Authorization-Exchange: auth_code=…, redirect_uri=http://127.0.0.1:38545/callback
  5. The MCP server redeems the code at /token, passing the same redirect_uri.

Option B – Web-Message (postMessage) Redirect

  1. Open a browser (or popup) to /authorize with response_mode=web_message or redirect_uri=https://sdk.example.com/oauth/relay.html

  2. Authorization server redirects to a minimal relay.html that immediately executes:

```html
<script>
  // The target origin must be pinned to the opener's known origin;
  // reading window.opener.location.origin cross-origin would throw a
  // SecurityError. 'https://app.example.com' is a placeholder.
  const TARGET_ORIGIN = 'https://app.example.com';
  window.opener.postMessage(
    {
      code: new URLSearchParams(location.search).get('code'),
      state: new URLSearchParams(location.search).get('state')
    },
    TARGET_ORIGIN
  );
  window.close();
</script>
```

  3. Parent window verifies state and POSTs to MCP with `X-Authorization-Exchange: auth_code=…`
  4. The MCP server redeems the code at /token.

Option C – Direct MCP Redirect (client bypass)

  1. MCP server pre-computes state, PKCE, and a server-controlled redirect_uri, e.g. https://mcp.example.com/oauth/callback.
  2. Client opens the browser to that URI.
  3. Authorization server redirects directly to the MCP callback: GET /oauth/callback?code=…&state=…
  4. The MCP server redeems the code at /token.

No X-Authorization-Exchange header is needed in this option because the client never sees the code.

Note that this option can be implemented with user interaction as proposed in #475


Passing /authorize data over SSE: custom header?

The BFF pattern doesn’t specify a standard way to forward the OAuth authorization code, so one idea is to place it in a custom header such as X-Authorization-Exchange.

  • Separation of concerns – The auth_code and redirect_uri travel in a dedicated header, keeping application-level SSE payloads clean.
  • Avoids misusing Authorization: Bearer – Because the code is only an intermediate artifact, labeling it as a full access token would be misleading.

Alternative Approaches Considered (As Per JSON-RPC)

1. Inline Metadata in Request Body or Payload

  • Example:

```json
{
  "request": { ... },
  "_meta": {
    "authorization": {
      "authCode": "xyz",
      "redirectUri": "http://127.0.0.1:38545/callback"
    }
  }
}
```

Security Considerations

  • Refresh tokens should be stored securely; MCP Server OAuth2 clients should follow best practices for token rotation and expiration.
  • Servers MUST NOT log tokens or include them in error messages.
  • Servers MAY support token revocation and introspection (RFC 7662) to detect and respond to compromised tokens.

References

  • RFC 6749: OAuth 2.0 Authorization Framework
  • RFC 6750: OAuth 2.0 Bearer Token Usage
  • RFC 7519: JSON Web Token (JWT)
  • RFC 7636: Proof Key for Code Exchange (PKCE)
  • RFC 7662: OAuth 2.0 Token Introspection
  • RFC 9728: OAuth 2.0 Protected Resource Metadata
  • RFC 8693: OAuth 2.0 Token Exchange
  • RFC 8707: Resource Indicators for OAuth 2.0
  • RFC 9449: OAuth 2.0 Demonstrating Proof of Possession (DPoP)

Design Considerations and Trade-offs

Initial proposals favored having clients handle the entire OAuth 2.0 flow and manage token storage. However, as @wdawson
pointed out, this approach introduces security vulnerabilities (the "confused deputy" problem).

The updated design delegates token management to the MCP server, except for initiating the OAuth 2.0 authorization code
flow: the server provides the client with the authorization endpoint, client ID, and PKCE parameters, and the client
returns the authorization code. The MCP server then redeems the code to obtain an access token, caches it, and refreshes
it as needed.

This proposal takes a pragmatic approach by treating the MCP client as an OAuth 2.0 client and the MCP server as an
OAuth 2.0 resource server, leveraging standard flows and existing infrastructure.
In practice, agents acting on behalf of users are already trusted with user data and actions, so they are responsible
for securely initiating the authorization flow.

Scope

  • Protocol Specification
  • SDK Features
  • Documentation
  • Developer Experience
  • Other

Replies: 4 comments · 4 replies


Thanks for the proposal @adranwit — it's clearly well thought out.

However, it seems like this proposal still expects the MCP client to have the tokens that the MCP server needs to authorize against other downstream services as part of tools. Is my understanding correct there?

If so, that is a dangerous security vulnerability as it mixes roles and exposes the tokens that the MCP server should be the OAuth client for and the MCP client should never have access to. This is a critical requirement in the OAuth security boundary.

As an alternative, I think we can use the existing MCP Authorization spec to provide fine grained authorization from MCP client to MCP server for each resource required.

Downstream-tool auth can be handled via OAuth token exchange (where supported). And, in the future, the identity assertion grant, which will improve the user experience.

When token exchange is not supported, we are proposing a way for the MCP server to request user interaction in #475.
This allows the MCP server to own the credentials (OAuth tokens, API keys, etc.) for any downstream services its tools need.

@adranwit

Thanks for raising this concern — it's important and appreciated.

You're right to highlight that role separation and token security are critical in OAuth. However, in this design, the MCP Client uses a matched OAuth client for the protected resource it interacts with. The token it obtains is appropriately scoped for that resource and passed to the MCP Server, which then delegates to the tool strictly within that scope and context.

From an OAuth perspective, this model is secure and consistent with the delegation pattern because:

  • The Authorization Server issues the token to the MCP Client with full awareness of the client identity and consent.
  • The MCP Client is not bypassing security boundaries — it is the authorized client, and the token is used in a controlled and auditable way.
  • The MCP Server and tool are trusted components within the same security domain, acting on the user’s behalf within the scope of the original authorization.

It’s also important to note that the MCP Server typically resides in a different infrastructure zone or trust boundary from the MCP Client. Requiring the MCP Server to act as the OAuth client would complicate the architecture — introducing cross-boundary authentication, additional latency, and tighter coupling to individual OAuth providers — all of which reduce flexibility and resilience.

Regarding the existing proposal: while the current design leverages HTTP and Server-Sent Events (SSE) and PRM— which can support fine-grained access control via interceptors (though it still requires protected resource metadata at the tool/resource level) — my proposal intentionally moves beyond HTTP, adopting JSON-RPC–level integration instead.

In this broader context, allowing the MCP Client to pass tokens it is legitimately issued — and scoped for downstream use — is a practical and secure approach.

@wdawson

There's a very tricky nuance here, @adranwit and it's one that wasn't clear until the Authorization spec was updated. When I wrote up #234 this was maybe acceptable, depending on interpretation, but now with the new Authorization spec, it's not.

The first two bullet points are fine, but the third is incorrect. We cannot assume that the MCP server and MCP client are within the same security domain in general. In some architectures, that may be the case, but the MCP specification cannot assume that and MUST define the protocol for when that is not the case. Notably, the section on token handling in the Authorization spec covers this. One specific security issue when doing this is the confused deputy problem.

Requiring the MCP server to act as an OAuth client for flows that need to use OAuth in downstream API calls is a security requirement, not a "complication". For example, if Alice builds a file-sharing MCP server that calls the Dropbox APIs she must create an OAuth app/client with Dropbox and Dropbox should issue tokens directly to Alice's MCP server for the users of that server.

You're right that there is tight coupling to these individual OAuth providers, but there is already tight coupling to the APIs that the MCP servers are calling. There are recent OAuth RFCs and drafts to simplify this process. However, very few services support those things today. In order to meet the industry where it is, I am proposing a form of user interaction in #475. That retains flexibility and resilience while maintaining proper security boundaries.

Note that if, instead, Dropbox is creating an MCP server that calls their own APIs, the MCP authorization server may be the same authorization server that protects their APIs that an MCP tool might call. In that case, Dropbox is free to use any mechanism they have internally to call those APIs, including an OAuth token exchange or some other security mechanism. The important part to note is that this tool calling authorization should remain outside of the MCP specification because it does not need to involve any MCP client/server communication.

@adranwit

Thanks for pointing out the confused deputy issue, @wdawson — it's a critical nuance, and I’m revising the proposal with that in mind.

Let me share a few thoughts, before updating the proposal.

While RFC 8693 (OAuth 2.0 Token Exchange) makes sense conceptually — especially for securing delegation across boundaries — I’m still not convinced that dynamic OAuth client registration should be relied on as a baseline. In practice, there are real-world deployment challenges (e.g., client trust, key management, and registration approval workflows) that make it difficult to assume broad support or safe usage in general-purpose architectures.

Passing the protected resource (from PRM) during the grant process is sound — it aligns with RFC 8707, enabling the Authorization Server to scope tokens precisely.

Regarding the OAuth client itself, I would generally expect it to be a confidential client. One practical way to address the confused deputy risk — while preserving the client/server separation — is to store the OAuth client credentials (client_id and private key or secret) solely on the MCP Server.

In this model:

When authentication is needed, the MCP Server sends the relevant client_id, scope, resource, and code_challenge to the MCP Client, which initiates the /authorize request. (No OAuth client secret or private key is passed.)

The MCP Client completes the front-channel flow and receives the authorization code.

The Client returns the auth code to the MCP Server, which then redeems it at /token using its own credentials as a confidential client.

This preserves key security boundaries:

The MCP Client never holds downstream tokens or secrets.

The MCP Server retains full control over access scopes and keys.

The Authorization Server can safely bind tokens to a specific audience.

If necessary, the MCP Server can still perform token exchange (RFC 8693) within its own trust boundary.

This setup eliminates the confused deputy vector, avoids requiring dynamic client registration, and still allows the user to initiate authorization flows from their own device or UI — without compromising security.

@nbarbettini

> Regarding the OAuth client itself, I would generally expect it to be a confidential client.

Agreed, but just to be clear: with respect to "downstream" resources (like a third-party resource server), the MCP server is the OAuth client. The MCP client is also an OAuth client (with respect to the MCP server only), but it is a public client by definition because it often is a browser app or desktop app.
I think we already agree on this, but wanted to clear it up for anyone else reading. There are a lot of "clients" in the mix!

> When authentication is needed, the MCP Server sends the relevant client_id, scope, resource, code_challenge to the MCP Client, which initiates the /authorize request. (No oauth client secret or private key is passed.)
>
> ... snip ...
>
> This preserves key security boundaries:
>
> The MCP Client never holds downstream tokens or secrets.
>
> The MCP Server retains full control over access scopes and keys.
>
> The Authorization Server can safely bind tokens to a specific audience.

I think this is the right way to do it. 👍 By preserving the security boundary around the MCP server, it also means that there is no need to publish metadata like required scopes per-tool/resource.

If you generalize it a tiny bit further to just "the MCP server sends a URL to the MCP client", it's a lot like what @wdawson and I wrote up in #475. I'd love your comments!

@adranwit

Thanks for the thoughtful feedback, @nbarbettini.

You're absolutely right—there are a lot of "clients" involved, so clarity is essential. In my proposal, each resource typically has its own OAuth2 client. Some tools may share a common client, but in general, there's one confidential OAuth2 client per resource.

To manage this complexity and enforce security boundaries, I’m introducing a BFF (Backend-for-Frontend) layer between the MCP server and the MCP client. This BFF uses resource-bound access tokens to ensure tokens are only valid for their intended audience.

Regarding client types:

Yes, I agree—the MCP server acts as a confidential client when calling downstream resources (like third-party APIs). But in its interaction with the MCP client (often a browser or desktop app), it behaves like a public client—since no secrets or private keys are exposed. (Technically, it still uses a confidential client ID, but no client secret is shared.)

Here’s the flow:

When authentication is required, the MCP server generates a URL containing the client_id, scope, resource, and code_challenge.

That URL is passed to the MCP client, which initiates the /authorize request.

No secrets or private keys are sent to the MCP client.

This design preserves key security guarantees:

The MCP client never handles downstream tokens or secrets.

The MCP server maintains full control over scopes and access.

The Authorization Server can issue tokens securely bound to specific resources.

I also like your idea to generalize the model a bit—saying “the MCP server sends a URL to the MCP client” lines up nicely with the model you and @wdawson proposed in #475. I’ll take a closer look there and would be happy to chime in!


Updated the various client-side strategy implementations with the Backend-for-Frontend flow.


For what it's worth, architecture diagram with the MCP server -> Authorization server bears a lot of resemblance to Kubernetes admission control. Kubernetes supports policies directly, but also supports a webhook as a pressure-relief valve for more complex or newly emerging patterns. Notably, K8s added policy support later because the webhooks became a reliability issue, and common usage patterns were better understood by that point. For inspiration, consider: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers

Additionally, signed access conditions attached to an opaque token is sort of how things like Google's Credential Access Boundaries work: https://cloud.google.com/iam/docs/downscoping-short-lived-credentials

In all likelihood, you'll want both concepts.
