Conversation


@mcowger mcowger commented Sep 26, 2025

Context

Introduce a self-service “Provider Defined” integration so users can onboard any OpenAI-compatible provider without waiting on core updates. The work adds full backend support (manifest + models fetching, embedded JSON ingestion), frontend configuration with manifest/embedded toggles, end-to-end tests, and accompanying documentation.

Implements Proposal: #2329

Implementation

  • Added a provider-defined handler, fetcher, schema updates, and request plumbing (including embedded JSON parsing); a rough sketch of both ingestion paths follows this list.
  • Built a Provider Defined settings panel with manifest/embedded source toggle, fetch/parse flows, error surfacing, and model dropdown.
  • Extended UI and backend tests to cover new behaviours and ensured type safety by updating shared schemas and selectors.
  • Documented the workflow and updated docs navigation to expose the new provider option.
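
For reviewers skimming without the diff, here is a minimal sketch of the two ingestion paths, assuming a hypothetical manifest shape. All names below (ProviderDefinedManifest, fetchProviderManifest, parseEmbeddedModels, and the model fields) are illustrative and are not the schema actually added by this PR.

```typescript
// Hypothetical shapes -- the real definitions live in the shared schema updates.
interface ProviderDefinedModel {
	id: string
	contextWindow?: number
	supportsImages?: boolean
}

interface ProviderDefinedManifest {
	name: string
	baseUrl: string
	models: ProviderDefinedModel[]
}

// Manifest URL flow: fetch and validate a manifest from a user-supplied URL,
// surfacing a readable error instead of leaking raw fetch/JSON failures to the UI.
export async function fetchProviderManifest(manifestUrl: string): Promise<ProviderDefinedManifest> {
	const response = await fetch(manifestUrl)
	if (!response.ok) {
		throw new Error(`Manifest request failed: ${response.status} ${response.statusText}`)
	}
	const data = (await response.json()) as ProviderDefinedManifest
	if (!data.name || !Array.isArray(data.models)) {
		throw new Error("Manifest is missing required fields (name, models)")
	}
	return data
}

// Embedded JSON flow: parse a pasted JSON array of models without any network call.
export function parseEmbeddedModels(embeddedJson: string): ProviderDefinedModel[] {
	const parsed: unknown = JSON.parse(embeddedJson)
	if (!Array.isArray(parsed)) {
		throw new Error("Embedded JSON must be an array of model definitions")
	}
	return parsed as ProviderDefinedModel[]
}
```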

Screenshots

Screenshot 2025-09-26 at 3 21 53 PM

How to Test

  1. Open Kilo Code settings → Provider Defined.
  2. Manifest URL flow:
     • Select “Manifest URL”, enter http://localhost:3000/v1/models?provider=true, and click Fetch.
     • Confirm the provider name (e.g. “Test Provider”) appears and the model dropdown lists provider/model-name-v1.
     • Run a chat using that model; you should receive the stubbed test response (see the sample test server output).
  3. Embedded JSON flow:
     • Select “Embedded JSON”, paste the two-entry JSON array provided in the docs (a sample array is sketched after this list), and click Parse.
     • Confirm models populate without hitting the endpoint, then start a chat to verify responses.
  4. Optional: switch back and forth between sources to ensure state resets gracefully (errors cleared, models reloaded).
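
For the Embedded JSON step, an array along these lines should work; the exact array in the docs may differ, so treat everything beyond provider/model-name-v1 (the second model ID and the extra fields) as placeholders.

```typescript
// Hypothetical two-entry model array for the Embedded JSON flow. Only
// provider/model-name-v1 comes from the test steps above; the rest is illustrative.
const embeddedModels = [
	{ id: "provider/model-name-v1", contextWindow: 128000, supportsImages: false },
	{ id: "provider/model-name-v2", contextWindow: 32000, supportsImages: true },
]

// Paste the JSON form into the Embedded JSON field, then click Parse:
console.log(JSON.stringify(embeddedModels, null, 2))
```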

Get in Touch

  • Added a new VS Code task for launching the OpenAI test server with background execution and problem matching.
  • Updated the OpenAI test server to default to a 0% error rate and added support for the provider manifest and extended models endpoints.
  • The extended models endpoint returns detailed model information, including capabilities, costs, and limits (see the sketch below).
  • The provider endpoint returns provider metadata for integration purposes.
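
As a rough illustration of what the updated test server exposes, the snippet below queries the provider and models endpoints. Only the base URL and the ?provider=true query come from the test steps above; the path of the extended models endpoint and the response field names (capabilities, costs, limits) are assumptions drawn from this description, not the server's exact contract.

```typescript
// Assumed base URL, matching the manual test steps above.
const BASE_URL = "http://localhost:3000"

async function inspectTestServer(): Promise<void> {
	// Provider metadata endpoint (the ?provider=true query is taken from the test steps).
	const provider = await fetch(`${BASE_URL}/v1/models?provider=true`).then((r) => r.json())
	console.log("Provider:", provider.name) // e.g. "Test Provider"

	// Extended models endpoint -- path and response shape are illustrative assumptions.
	const models = await fetch(`${BASE_URL}/v1/models`).then((r) => r.json())
	for (const model of models.data ?? models) {
		console.log(model.id, { capabilities: model.capabilities, costs: model.costs, limits: model.limits })
	}
}

inspectTestServer().catch(console.error)
```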

changeset-bot bot commented Sep 26, 2025

⚠️ No Changeset found

Latest commit: abb5d48

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


