feat(backend): add insforge.toml config-as-code routes#1200
Adds settings.ts and buckets.ts thin wrappers around the existing AuthConfigService and StorageService so the config service has clean seams. upsertBucket lives in storage/buckets.ts and provides the idempotent create-or-update semantic the spec requires. Note: jwt_expiry, enable_signup, and site_url have no backing columns in auth.config today; the reader returns undefined for them and the writer is a no-op for those fields until the schema grows real columns. allowedRedirectUrls maps to additional_redirect_urls. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
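The create-or-update semantic described above can be sketched as follows. This is an illustrative stand-in, not the PR's actual code: the real `upsertBucket` is async and delegates to `StorageService`, while here a minimal in-memory store stands in so the idempotence property is visible.

```typescript
// Hypothetical store interface; the real implementation talks to Postgres.
interface BucketStore {
  exists(name: string): boolean;
  create(name: string, isPublic: boolean): void;
  setVisibility(name: string, isPublic: boolean): void;
}

// In-memory fake for demonstration (maps bucket name -> public flag).
function makeMemoryStore(): BucketStore & { buckets: Map<string, boolean> } {
  const buckets = new Map<string, boolean>();
  return {
    buckets,
    exists: (name) => buckets.has(name),
    create: (name, isPublic) => {
      buckets.set(name, isPublic);
    },
    setVisibility: (name, isPublic) => {
      buckets.set(name, isPublic);
    },
  };
}

// Create the bucket if missing, otherwise converge its visibility.
// Calling this twice with the same input is a no-op the second time,
// which is the idempotent semantic the config spec requires.
function upsertBucket(store: BucketStore, name: string, isPublic: boolean): void {
  if (!store.exists(name)) {
    store.create(name, isPublic);
  } else {
    store.setVisibility(name, isPublic);
  }
}
```

Re-applying the same desired state converges rather than erroring, which is what lets the apply flow run repeatedly against the same `insforge.toml`.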
Wires the new config service into the API layer. The OSS backend is single-tenant — one Postgres per InsForge instance — so all config-apply requests serialize on a fixed advisory key (`default`). Multi-tenant cloud deployments would supply req.projectId from auth middleware. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…s only

Of the four originally-scoped [auth] fields, only additional_redirect_urls is backed by a real column on auth.config today. Sending jwt_expiry, enable_signup, or site_url through setAuthSettings silently no-ops, which breaks idempotence: the diff would re-surface them on every plan. Trim schema, diff, apply, read, and the settings wrapper down to just additional_redirect_urls. The other three fields are punted to a follow-up plan that adds the backing columns to auth.config.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Walkthrough

Introduces a complete configuration-as-code feature for backend infrastructure. Adds schema validation, config diffing, config application with dry-run and prune modes, PostgreSQL advisory locking for concurrency, authentication and storage bucket management wrappers, HTTP API endpoints, and supporting documentation. Routes are integrated into the main server.

Changes

Configuration Management System
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 6
🧹 Nitpick comments (4)
backend/src/services/storage/buckets.ts (1)
26-34: 💤 Low value: TOCTOU race in `upsertBucket` mitigated by advisory lock.

The check-then-act pattern (`bucketExists` → `createBucket`) is susceptible to race conditions if called concurrently. Since the apply flow wraps calls in `withConfigApplyLock`, this is safe in practice. Consider adding a brief comment noting that callers must hold the config apply lock, or make the function internally safe with a try-catch around `createBucket` that handles "already exists" errors.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/src/services/storage/buckets.ts` around lines 26 - 34, The upsertBucket function uses a check-then-act pattern (bucketExists → createBucket) that is vulnerable to a TOCTOU race; make it explicit and safe by either adding a short comment above upsertBucket that callers must hold the withConfigApplyLock when calling it, or make the function itself resilient by wrapping the createBucket call in a try-catch that treats an "already exists" error as success and proceeds to update visibility (use the existing bucketExists, createBucket, updateBucketVisibility symbols for context); prefer the try-catch approach if you want internal safety, otherwise document the lock requirement clearly.

backend/src/services/config/diff.ts (1)
106-117: 💤 Low value: `deepEqual` is limited to arrays and primitives.

The current implementation handles arrays and primitives but returns `false` for object comparisons (other than arrays). This is sufficient for the current MVP scope (comparing string arrays for redirect URLs), but if you later add nested object fields, this will need to be extended.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/src/services/config/diff.ts` around lines 106 - 117, deepEqual currently only compares primitives and arrays and returns false for plain objects; update the deepEqual function to also handle plain object comparison by: treating null separately, detecting non-array objects (typeof === "object" && !Array.isArray), comparing their key sets length and presence, and recursively calling deepEqual on each corresponding property value (use the existing array handling for arrays and === for primitives as fallbacks). Ensure the function still short-circuits identical references (a === b) and preserves array handling in deepEqual.
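An extension along the lines the prompt describes might look like this. This is a sketch, not the actual `diff.ts` implementation; it assumes the array/primitive behavior described above and adds plain-object support on top.

```typescript
// Sketch: deepEqual extended to plain objects, preserving the existing
// array and primitive behavior. Not the repo's actual implementation.
function deepEqual(a: unknown, b: unknown): boolean {
  if (a === b) return true; // short-circuit identical references/primitives
  if (Array.isArray(a) && Array.isArray(b)) {
    // Element-wise recursive comparison, as in the current array handling.
    return a.length === b.length && a.every((v, i) => deepEqual(v, b[i]));
  }
  if (
    typeof a === 'object' && a !== null && !Array.isArray(a) &&
    typeof b === 'object' && b !== null && !Array.isArray(b)
  ) {
    const ra = a as Record<string, unknown>;
    const rb = b as Record<string, unknown>;
    const ka = Object.keys(ra);
    const kb = Object.keys(rb);
    // Same key-set size, every key present in both, values recursively equal.
    return (
      ka.length === kb.length &&
      ka.every((k) => k in rb && deepEqual(ra[k], rb[k]))
    );
  }
  return false; // mismatched types or differing primitives
}
```

Note that `null` is excluded explicitly before the object branch, since `typeof null === 'object'` in JavaScript.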
backend/src/services/config/schema.ts (1)

32-51: ⚖️ Poor tradeoff: Consider using Zod schemas for validation.

The manual validation logic works but is more verbose and doesn't provide automatic type inference. As per coding guidelines: "Use Zod schemas for request/response validation in backend". Zod would also give you stricter validation (rejecting unknown keys) and better error messages out of the box.

Example Zod schema structure

```typescript
import { z } from 'zod';

export const BucketConfigSchema = z
  .object({
    public: z.boolean().optional(),
  })
  .strict();

export const StorageConfigSchema = z
  .object({
    buckets: z.record(z.string(), BucketConfigSchema).optional(),
  })
  .strict();

export const AuthConfigSchema = z
  .object({
    additional_redirect_urls: z.array(z.string()).optional(),
  })
  .strict();

export const InsforgeConfigSchema = z
  .object({
    project_id: z.string().optional(),
    auth: AuthConfigSchema.optional(),
    storage: StorageConfigSchema.optional(),
  })
  .strict();

export type InsforgeConfig = z.infer<typeof InsforgeConfigSchema>;

export function validateConfig(input: unknown): InsforgeConfig {
  return InsforgeConfigSchema.parse(input);
}
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/src/services/config/schema.ts` around lines 32 - 51, Replace the hand-rolled validation in validateConfig with a Zod schema: add imports from z, define BucketConfigSchema, StorageConfigSchema, AuthConfigSchema and InsforgeConfigSchema using z.object(...).strict() and optional() for optional fields, derive the InsforgeConfig type with z.infer<typeof InsforgeConfigSchema>, and update validateConfig to simply return InsforgeConfigSchema.parse(input); remove the manual checks around project_id/auth/storage and stop calling validateAuth/validateStorage (instead embed their schemas or reuse them) so unknown keys are rejected and Zod error messages are returned.

backend/src/api/routes/config/apply.ts (1)
14-16: ⚡ Quick win: Prefer env-backed project fallback before `'default'`.

Line 15 currently falls back directly to `'default'`. To match existing project-scoping conventions, use `process.env.PROJECT_ID` before the single-project sentinel.

Suggested fix

```diff
 function getProjectRefFromRequest(req: AuthRequest): string {
-  return req.projectId ?? SINGLE_TENANT_PROJECT_KEY;
+  return req.projectId ?? process.env.PROJECT_ID ?? SINGLE_TENANT_PROJECT_KEY;
 }
```

Based on learnings, "derive the project ID from `req.projectId` ... or from `process.env.PROJECT_ID` (self-hosted). Use `'default'` only as an intentional fallback for single-project setups."

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/src/api/routes/config/apply.ts` around lines 14 - 16, The getProjectRefFromRequest function currently returns req.projectId or SINGLE_TENANT_PROJECT_KEY; change it to prefer an env-backed project by returning req.projectId ?? process.env.PROJECT_ID ?? SINGLE_TENANT_PROJECT_KEY so that process.env.PROJECT_ID is used for self-hosted deployments and SINGLE_TENANT_PROJECT_KEY ('default') remains only the last-resort sentinel.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: cb6788b7-9f1a-4660-b863-348c3c1f6b69
📒 Files selected for processing (15)

- backend/src/api/routes/config/apply.ts
- backend/src/api/routes/config/get.ts
- backend/src/api/routes/config/index.routes.ts
- backend/src/server.ts
- backend/src/services/auth/settings.ts
- backend/src/services/config/apply.test.ts
- backend/src/services/config/apply.ts
- backend/src/services/config/diff.test.ts
- backend/src/services/config/diff.ts
- backend/src/services/config/lock.test.ts
- backend/src/services/config/lock.ts
- backend/src/services/config/read.ts
- backend/src/services/config/schema.ts
- backend/src/services/storage/buckets.ts
- docs/agent-docs/config-as-code.md
```typescript
const body = req.body as { config?: unknown; dry_run?: boolean; prune?: boolean };

try {
  const pool = DatabaseManager.getInstance().getPool();
  const result = await withConfigApplyLock(pool, projectRef, () =>
    applyConfig({
      config: body.config,
      dry_run: body.dry_run ?? false,
      prune: body.prune ?? false,
    })
```
Validate dry_run and prune as booleans before apply.
Lines 31-32 accept any non-null value; "false" (string) is truthy and can incorrectly enable dry-run/prune behavior. This can cause unintended bucket deletions.
Suggested fix

```diff
 import type { Request, Response, NextFunction } from 'express';
+import { z } from 'zod';
 import { applyConfig } from '@/services/config/apply.js';
 ...
+const applyRequestSchema = z.object({
+  config: z.unknown(),
+  dry_run: z.boolean().optional(),
+  prune: z.boolean().optional(),
+});
+
 export async function handleApplyConfig(
   req: Request,
   res: Response,
   next: NextFunction
 ): Promise<void> {
   const projectRef = getProjectRefFromRequest(req as AuthRequest);
-  const body = req.body as { config?: unknown; dry_run?: boolean; prune?: boolean };
+  const parsed = applyRequestSchema.safeParse(req.body);
+  if (!parsed.success) {
+    res.status(400).json({ error: 'invalid_request', message: parsed.error.message });
+    return;
+  }
+  const body = parsed.data;
```

As per coding guidelines, "Use Zod schemas for request/response validation in backend".
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/src/api/routes/config/apply.ts` around lines 24 - 33, The route is
currently trusting body.dry_run and body.prune which can be non-boolean (e.g.
"false") and incorrectly enable behaviors; add a Zod schema to validate/parse
req.body (ensuring config?: unknown, dry_run?: boolean, prune?: boolean) and use
schema.parse or safeParse to coerce/validate values before calling
withConfigApplyLock/applyConfig so you always pass actual booleans (defaulting
to false when absent or invalid); reference the request handler that reads body
and the applyConfig call to replace the ad-hoc casting with the validated
result.
```typescript
res
  .status(400)
  .json({ error: 'config_validation', message: err.message, path: err.path });
```
Prettier formatting at Lines 38-40 is failing CI.
This style mismatch is currently blocking lint-and-format checks.
Suggested fix

```diff
-      res
-        .status(400)
-        .json({ error: 'config_validation', message: err.message, path: err.path });
+      res.status(400).json({ error: 'config_validation', message: err.message, path: err.path });
```
🧰 Tools
🪛 ESLint
[error] 38-40: Replace ⏎········.status(400)⏎········ with .status(400)
(prettier/prettier)
🪛 GitHub Actions: Lint and Format Check / 0_lint-and-format.txt
[error] 38-38: prettier/prettier: Replace ⏎········.status(400)⏎········ with .status(400) (step: npx eslint backend/src/api/routes/config/apply.ts)
🪛 GitHub Actions: Lint and Format Check / lint-and-format
[error] 38-38: Prettier formatting error from ESLint: Replace ⏎········.status(400)⏎········ with .status(400) prettier/prettier. Command: npx eslint backend/src/api/routes/config/apply.ts
🪛 GitHub Check: lint-and-format
[failure] 38-38:
Replace ⏎········.status(400)⏎········ with .status(400)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/src/api/routes/config/apply.ts` around lines 38 - 40, The chained
response call is misformatted for Prettier; replace the multi-line chain with a
single properly punctuated statement (e.g., return res.status(400).json({ error:
'config_validation', message: err.message, path: err.path });) so the
res.status(...).json(...) call is on one line with proper semicolon/return to
satisfy formatting/lint rules in the apply handler.
```typescript
it('applies modifications and returns applied=true', async () => {
  const { setAuthSettings } = await import('../auth/settings.js');
  vi.clearAllMocks();
  await applyConfig({
    config: { auth: { additional_redirect_urls: ['http://b'] } },
    dry_run: false,
    prune: false,
  });
  expect(setAuthSettings).toHaveBeenCalledWith(
    expect.objectContaining({ additionalRedirectUrls: ['http://b'] }),
  );
});
```
Test title promises applied=true, but it isn’t asserted.
Line 33 says this case verifies applied=true, but the test only checks setAuthSettings. Please assert the returned flag explicitly.
Suggested fix

```diff
   it('applies modifications and returns applied=true', async () => {
     const { setAuthSettings } = await import('../auth/settings.js');
     vi.clearAllMocks();
-    await applyConfig({
+    const result = await applyConfig({
       config: { auth: { additional_redirect_urls: ['http://b'] } },
       dry_run: false,
       prune: false,
     });
+    expect(result.applied).toBe(true);
     expect(setAuthSettings).toHaveBeenCalledWith(
       expect.objectContaining({ additionalRedirectUrls: ['http://b'] }),
     );
   });
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/src/services/config/apply.test.ts` around lines 33 - 44, The test
title promises applied=true but doesn't assert it; update the test that calls
applyConfig (in backend/src/services/config/apply.test.ts) to capture the return
value from applyConfig(...) and add an assertion that the returned object's
applied property is true (e.g., expect(result.applied).toBe(true)), keeping the
existing check that setAuthSettings was called with the expected
additionalRedirectUrls; target the applyConfig invocation and the
setAuthSettings mock in your change.
```typescript
if (input.dry_run || plan.changes.length === 0) {
  return { plan, applied: false };
}
```
applied can be true even when nothing was mutated.
Line 50 returns applied: true for plans that only contain remove changes with kept: true (prune disabled), but Lines 45-47 intentionally skip mutation. This makes the response misleading.
Suggested fix

```diff
 export async function applyConfig(input: ApplyInput): Promise<ApplyResult> {
   const file = validateConfig(input.config);
   const live = await readLiveConfig();
   const plan = diffConfig({ live, file, prune: input.prune });
+  const actionableChanges = plan.changes.filter(
+    (change) =>
+      !(
+        change.section === 'storage.buckets' &&
+        change.op === 'remove' &&
+        change.kept
+      )
+  );
-  if (input.dry_run || plan.changes.length === 0) {
+  if (input.dry_run || actionableChanges.length === 0) {
     return { plan, applied: false };
   }
-  const authChanges = plan.changes.filter((c) => c.section === 'auth');
+  const authChanges = actionableChanges.filter((c) => c.section === 'auth');
   if (authChanges.length > 0 && file.auth) {
     await setAuthSettings({
       additionalRedirectUrls:
         file.auth.additional_redirect_urls ?? live.auth?.additional_redirect_urls ?? [],
     });
   }
-  for (const c of plan.changes) {
+  for (const c of actionableChanges) {
```

Also applies to: 45-50
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/src/services/config/apply.ts` around lines 23 - 25, The response sets
applied: true even when no mutation happens (e.g., all plan.changes are remove
operations with kept === true and prune is disabled) — update the logic that
computes the returned applied flag to reflect whether any actual mutation
occurred: when preparing the result in apply (use the same scope where
input.dry_run, plan.changes and the final applied value are set) compute a
boolean like mutated = plan.changes.some(c => !(c.type === 'remove' && c.kept
=== true)) (and treat dry_run as always false for mutation detection), then
return { plan, applied: mutated } (preserving the existing early return for
input.dry_run or empty changes but ensure applied=false in those cases).
```typescript
// Apply auth section if any auth changes are present.
const authChanges = plan.changes.filter((c) => c.section === 'auth');
if (authChanges.length > 0 && file.auth) {
  await setAuthSettings({
    additionalRedirectUrls:
      file.auth.additional_redirect_urls ?? live.auth?.additional_redirect_urls ?? [],
  });
}

// Apply storage.buckets section.
for (const c of plan.changes) {
  if (c.section !== 'storage.buckets') {
    continue;
  }
  if (c.op === 'add') {
    await upsertBucket({ name: c.key, public: c.value.public ?? false });
  } else if (c.op === 'modify') {
    await upsertBucket({ name: c.key, public: c.to });
  } else if (c.op === 'remove' && !c.kept) {
    await deleteBucket(c.key);
  }
}
```
Non-atomic apply can leave partial state on failure.
Line 30 can persist auth changes before a later bucket mutation fails on Lines 42-47, which leaves the system partially updated despite an error response. Please make the apply phase atomic (single DB transaction where possible) or add compensating rollback logic per applied step.
As per coding guidelines, "Implement proper database transaction handling".
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/src/services/config/apply.ts` around lines 27 - 48, The current apply
logic may persist auth via setAuthSettings() before bucket changes
(upsertBucket/deleteBucket) run, causing partial updates on failure; modify
apply to run all changes inside a single database transaction (or use an
existing transactional API) so setAuthSettings, upsertBucket and deleteBucket
are executed atomically, or if transactions are unavailable implement
compensating rollback that reverses setAuthSettings and any
upsertBucket/deleteBucket effects when a later step fails; update code paths
that call setAuthSettings, upsertBucket, and deleteBucket to accept/participate
in the transaction (or return undo information) and ensure plan.changes
iteration either runs inside that transaction scope or triggers rollback on
error.
```md
## Concurrency

Each `apply` takes a Postgres advisory lock keyed on `(project, "config_apply")`. Two concurrent `apply` calls serialize; calls for different projects run in parallel.
```
Clarify single-tenant lock behavior to avoid overstating parallelism.
Line 70 currently implies cross-project parallel applies unconditionally, but current OSS behavior serializes all applies on a fixed project key unless req.projectId is present. Please make this conditional in the doc (e.g., “in multi-tenant/project-scoped deployments”).
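For context, the per-project keying this comment refers to might be sketched as follows. This is illustrative only: `LOCK_NAMESPACE` and `hashProjectRef` are assumed names, and the repo's actual `lock.ts` may differ.

```typescript
// Illustrative per-project advisory-lock keying: hash the project ref to a
// 32-bit integer and pair it with a fixed app-level namespace, so
// pg_advisory_lock(namespace, key) serializes applies for the same project
// while different projects take different keys.
const LOCK_NAMESPACE = 0x1f5f0; // arbitrary constant chosen for this sketch

// FNV-1a, folded to a signed 32-bit int as pg advisory lock keys require.
function hashProjectRef(projectRef: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < projectRef.length; i++) {
    h ^= projectRef.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h | 0; // fold to signed 32-bit
}

// Minimal wrapper over anything exposing pg's query() shape.
interface Queryable {
  query(sql: string, params: unknown[]): Promise<unknown>;
}

async function withConfigApplyLock<T>(
  db: Queryable,
  projectRef: string,
  fn: () => Promise<T>,
): Promise<T> {
  const key = hashProjectRef(projectRef);
  await db.query('SELECT pg_advisory_lock($1, $2)', [LOCK_NAMESPACE, key]);
  try {
    return await fn();
  } finally {
    // Always release, even if fn throws.
    await db.query('SELECT pg_advisory_unlock($1, $2)', [LOCK_NAMESPACE, key]);
  }
}
```

Under this scheme, a single-tenant install that always passes the same sentinel project ref hashes to the same key every time, which is exactly why all applies serialize in the OSS deployment.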
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@docs/agent-docs/config-as-code.md` around lines 68 - 71, The docs overstate
concurrency: clarify that taking a Postgres advisory lock for each `apply` keyed
on `(project, "config_apply")` only enables cross-project parallel `apply` calls
in multi-tenant/project-scoped deployments; in single-tenant OSS installs we
serialize all applies on a fixed project key unless `req.projectId` is present.
Update the Concurrency section to make this conditional (e.g., “in
multi-tenant/project-scoped deployments (when `req.projectId` is set) calls for
different projects run in parallel; otherwise all applies are serialized on a
single project key”).
Companion to: InsForge/CLI#100 (CLI half — talks to the routes this PR adds)

What
Adds the server-side half of `insforge.toml` config-as-code: two new endpoints under `/api/config`, idempotent apply orchestration with a Postgres advisory lock, and an agent-facing doc. Server-side apply validates the entire desired state before any mutation, applies each section idempotently (auth via `setAuthSettings`, buckets via `upsertBucket`/`deleteBucket`), and is wrapped in a `pg_advisory_lock` keyed on `(LOCK_NAMESPACE, hash(projectRef))` so concurrent applies serialize per project.

Why
See InsForge/CLI#100 for the full motivation. Short version: agents and humans review code through diffs, not API call logs. A TOML intent layer makes config changes reviewable, repo-clonable, and `git revert`-able. The DB stays the source of truth; the TOML is intent.

What's in this PR (backend half)
8 commits across 11 files:
New service module at `backend/src/services/config/`:

- `schema.ts` — config JSON validator (mirrors CLI; intentional duplication for v1, hoist to `@insforge/shared-schemas` later)
- `read.ts` — projects live state from existing services into the JSON shape
- `diff.ts` + tests — server-side diff (mirror of CLI for sanity)
- `lock.ts` + tests — Postgres advisory lock helper
- `apply.ts` + tests — idempotent orchestration

New routes at `backend/src/api/routes/config/`:

- `get.ts`, `apply.ts`, `index.routes.ts` — mounted at `/api/config` in `server.ts`

Service-layer additions (small, named):

- `services/auth/settings.ts` — clean wrapper over `AuthConfigService` for stable seams
- `services/storage/buckets.ts` — adds `upsertBucket` (didn't exist; needed for idempotence)

Agent doc:
docs/agent-docs/config-as-code.mdKey behaviors
--pruneopts in.ConfigValidationError→ 400 with structured message.env(NAME). Resolved server-side at apply time.MVP scope (deliberately narrow)
Wired in v1: `[auth] additional_redirect_urls` and `[storage.buckets.<name>] public`. Discovered during recon that `jwt_expiry`/`enable_signup`/`site_url` don't have backing columns in `auth.config`; shipping them would silently no-op and break idempotence (read returns undefined, file says X, plan shows infinite drift). Honest fix: shrink MVP, punt those three to a follow-up plan that adds columns first.
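For reference, the v1 surface above might look like this in an `insforge.toml`; the project ID, bucket name, and URL below are made-up examples, not values from this PR.

```toml
# Illustrative insforge.toml for the v1 surface: only
# [auth] additional_redirect_urls and per-bucket `public` are wired.
project_id = "default"

[auth]
additional_redirect_urls = ["https://app.example.com/callback"]

[storage.buckets.avatars]
public = true
```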
[auth] additional_redirect_urlsand[storage.buckets.<name>] public. Discovered during recon thatjwt_expiry/enable_signup/site_urldon't have backing columns inauth.config; shipping them would silently no-op and break idempotence (read returns undefined, file says X, plan shows infinite drift). Honest fix: shrink MVP, punt those three to a follow-up plan that adds columns first.OAuth providers, function metadata, SMTP, schedules, AI provider config, etc. — all are matrix-fill follow-ups. Architecture is identical for each; just plumbing.
Tests
`lock.test.ts` uses an in-memory fake Pool matching the existing repo convention (`backend/tests/unit/payments-advisory-lock.test.ts`). A real-DB integration test will come with a follow-up that adds shared pg test infra.

Test plan
- `npx vitest run src/services/config/diff.test.ts src/services/config/lock.test.ts src/services/config/apply.test.ts` — 14/14 green
- Against a `docker compose` instance (requires CLI half deployed)

🤖 Generated with Claude Code