Tags: astronomer/agents


astro-airflow-mcp-0.8.2


Verified

This commit was created on GitHub.com and signed with GitHub’s verified signature.
af: skip preamble lines when parsing astro CLI table output (#213)

## Summary
- When `ASTRO_API_TOKEN` is set in the env, astro CLI prepends `Using an
Astro API Token` to stdout (see `astro-cli` `cmd/cloud/setup.go:362`).
The af table parser was treating that line as the header row, collapsing
the deployment table to a single column called
`using_an_astro_api_token`, so every row was dropped silently and `af
instance discover` reported `No instances discovered` with no error.
- The fix walks `lines` from the top and picks the first line whose
boundary detection yields 2+ columns as the real header. A real
multi-column header always has 2-space gaps between columns and yields
2+ boundaries; preamble lines have only single-space gaps and yield 1.
- Fix is generic: handles any future single-line preamble astro CLI may
add (warnings, deprecation notices, etc.) without needing to enumerate
them.
- `astro deployment list --json` would also sidestep this, but it was
only added in astro CLI v1.42.0 (#2063), so supporting older astro CLI
versions still requires this parser to be robust.
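The header-picking heuristic described above can be sketched roughly like this (names and sample output are illustrative, not the actual af parser):

```python
import re

def find_header_index(lines):
    """Pick the first line that splits into 2+ columns on runs of two
    or more spaces; single-space preamble lines never qualify."""
    for i, line in enumerate(lines):
        columns = [c for c in re.split(r"\s{2,}", line.strip()) if c]
        if len(columns) >= 2:
            return i
    return None

stdout = [
    "Using an Astro API Token",                     # preamble: 1 column
    "NAME           DEPLOYMENT ID   RELEASE NAME",  # real header: 3 columns
    "my-deployment  cabc123         quasar-1234",
]
assert find_header_index(stdout) == 1
```

Because the scan only looks at column boundaries, any future single-line preamble (warnings, deprecation notices) is skipped the same way.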

## Test plan
- [x] New unit test covers the `Using an Astro API Token` preamble case
- [x] New unit test covers preamble + no-results (`no Deployments found
in workspace X`) - returns `[]` cleanly
- [x] Existing 33 table-parsing tests still pass
- [x] Full `tests/test_astro_cli.py` + `tests/test_discovery.py` green
(76/76)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

astro-airflow-mcp-0.8.1


Verified

af: strip query string from airflow_url so version detection works (#210)

astro-airflow-mcp-0.8.0

astro-airflow-mcp 0.8.0

Highlights:
- Layered config: global (~/.astro/config.yaml) + project-shared (.astro/config.yaml) + project-local (.astro/config.local.yaml), mirroring git config's system/global/local model
- Default global config path moved from ~/.af/config.yaml to ~/.astro/config.yaml (shared with astro-cli; legacy path honored as read-only fallback for one release)
- New 'af migrate' command: idempotent migration from legacy path with .bak preserved
- New 'af instance show' command: a 'git config --show-origin'-style display of where an instance is defined
- Discover Astro deployments with PAT auth (no per-deployment token minting)
- Skill update: don't pin Airflow Version on 'astro dev init'
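The layered-config precedence in the first highlight can be sketched as a simple dict merge (a hedged illustration; the real loader reads the three YAML paths listed above):

```python
def merge_layers(*layers):
    """git-config-style layering: later (more local) layers win."""
    merged = {}
    for layer in layers:      # lowest precedence first
        merged.update(layer)  # later layers override earlier keys
    return merged

global_cfg = {"instance": "prod", "timeout": 30}  # ~/.astro/config.yaml
shared_cfg = {"instance": "staging"}              # .astro/config.yaml
local_cfg  = {"timeout": 5}                       # .astro/config.local.yaml

assert merge_layers(global_cfg, shared_cfg, local_cfg) == {
    "instance": "staging", "timeout": 5,
}
```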

astro-airflow-mcp-0.7.0

astro-airflow-mcp 0.7.0

Stable release of the PAT-auth rewrite.

`af instance discover astro` now reuses the user's astro login session
via Auth0 refresh-token exchange instead of minting permanent
DEPLOYMENT_ADMIN tokens. Existing token / basic auth instances keep
working unchanged.

Bundles the OSError fix from #205 so non-file ASTRO_HOME paths surface
clean errors instead of masked version-detect failures.

Verified across the full break battery plus otto and local-dev
end-to-end runs, on top of 0.7.0a1 and 0.7.0a2.

astro-airflow-mcp-0.7.0a2

astro-airflow-mcp 0.7.0a2

Alpha 2: bundles the OSError fix from #205. `_read_yaml` now catches the
specific OSError shapes that mean 'no readable config'
(FileNotFoundError, NotADirectoryError, IsADirectoryError) so
ASTRO_HOME=/dev/null and similar sentinel paths surface clean
'run astro login' errors instead of masked version-detect failures.
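A minimal sketch of that error handling (illustrative names, not the actual `_read_yaml` source):

```python
def read_config_text(path):
    """Return the file's text, or None when the path cannot hold a
    readable config (missing file, /dev/null-style sentinel, directory)."""
    try:
        with open(path) as f:
            return f.read()
    except (FileNotFoundError, NotADirectoryError, IsADirectoryError):
        return None  # caller surfaces a clean 'run astro login' error
    # Any other OSError (permissions, disk I/O) still propagates.

assert read_config_text("/nonexistent/dir/config.yaml") is None
```

Catching only these three OSError subclasses keeps genuine I/O failures loud while mapping "no such config" cases to one clean code path.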

astro-airflow-mcp-0.7.0a1

astro-airflow-mcp 0.7.0a1

Alpha release with PAT-based auth for Astro deployments.

`af instance discover astro` now reuses the user's astro login session
instead of minting permanent DEPLOYMENT_ADMIN tokens. Existing token /
basic auth instances keep working unchanged.

Smoke testing in otto before promoting to 0.7.0 stable.

astro-airflow-mcp-0.6.4


Verified

Fix list_dag_runs (MCP tool + af CLI) returning oldest runs first (#192)

The `list_dag_runs` MCP tool only accepted `dag_id` and called Airflow's
`GET /dags/{dag_id}/dagRuns` with no `order_by`. Airflow's default sort
on that endpoint is `id ASC`, so for any DAG with more than 100 runs the
tool returned the oldest 100 runs and the recent ones were unreachable
through the MCP — directly contradicting the docstring's promise of
"sorted by most recent".

The `af runs list` CLI hit the same root cause: `--order-by` defaulted
to `None`, so the API again returned oldest-first. Issue #168 reported
this from the CLI side.

## Changes

**MCP tool (`list_dag_runs`)**
- Exposes `limit`, `offset`, and `order_by` on the tool surface
- Defaults `order_by="-start_date"` so callers get newest-first

**CLI (`af runs list`)**
- Changes `--order-by` default from `None` to `-start_date`, matching
the MCP tool

## Why `-start_date`

`start_date` is the only sort field available on both Airflow 2.x and
3.x DAG-run list endpoints. `run_after` is Airflow 3 only;
`logical_date` doesn't exist in v2 (it's `execution_date` there).
Picking `-start_date` keeps a single default that works across both
adapters.

Callers who want a different order (e.g. `id` ascending to match the old
Airflow default, or filter by state) can still pass `--order-by`
explicitly on the CLI or `order_by=` on the MCP tool.
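The resulting defaults can be pictured as a tiny param builder (illustrative; the real tool assembles these query params inside the adapter):

```python
def dag_runs_params(limit=100, offset=0, order_by="-start_date"):
    """Query params for GET /dags/{dag_id}/dagRuns. The -start_date
    default gives newest-first on both Airflow 2.x and 3.x, since
    start_date is the only sort field both versions expose here."""
    return {"limit": limit, "offset": offset, "order_by": order_by}

assert dag_runs_params()["order_by"] == "-start_date"
# Callers can still opt back into the old Airflow default ordering:
assert dag_runs_params(order_by="id")["order_by"] == "id"
```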

Closes #168

---------

Co-authored-by: Kaxil Naik <kaxilnaik@gmail.com>

astro-airflow-mcp-0.6.3


Verified

Add Airflow 2.x plugin mode support for MCP server (#186)

## Summary

Extends the MCP server's plugin mode to work with Airflow 2.x, matching
the existing Airflow 3 plugin support. The MCP server can now run
embedded in an AF2 webserver process with endpoints at `/mcp/v1/`,
rather than only as a standalone sidecar.

## Design rationale

**Why a Flask blueprint?** AF2's webserver is Flask/WSGI, not
FastAPI/ASGI like AF3. Plugins register via `flask_blueprints` instead
of `fastapi_apps`. The blueprint lives in the same `plugin.py` as the
AF3 integration and only activates when FastAPI isn't importable.

**Why an ASGI bridge?** FastMCP is ASGI-native. Rather than reimplement
the MCP Streamable HTTP protocol in Flask, the plugin runs one asyncio
event loop in a daemon thread and submits each Flask request to it via
`run_coroutine_threadsafe`. The FastMCP lifespan is started once on the
shared loop so the task group stays initialized.

**Why lazy init (not at plugin load)?** Gunicorn forks worker processes,
and threads don't survive the fork. The event loop + lifespan start on
the first request in each worker, guarded by a threading lock.
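The bridge described in the last two points can be sketched like this (a hedged, stdlib-only illustration; names are not the plugin's actual symbols, and the FastMCP ASGI call is replaced by a stand-in coroutine):

```python
import asyncio
import threading

_loop = None
_lock = threading.Lock()

def _ensure_loop():
    """Lazily start one asyncio loop in a daemon thread per worker.
    Deferred to first request so it runs after gunicorn's fork."""
    global _loop
    with _lock:  # guard concurrent first requests in one worker
        if _loop is None:
            _loop = asyncio.new_event_loop()
            threading.Thread(target=_loop.run_forever, daemon=True).start()
    return _loop

def run_on_shared_loop(coro):
    """Submit a coroutine from a sync (Flask/WSGI) handler and block."""
    future = asyncio.run_coroutine_threadsafe(coro, _ensure_loop())
    return future.result(timeout=30)

async def handle_mcp_request(payload):
    # Stand-in for dispatching into the FastMCP ASGI app.
    return {"echo": payload}

assert run_on_shared_loop(handle_mcp_request("ping")) == {"echo": "ping"}
```

Because every request is funneled onto the same loop, the FastMCP lifespan (and its task group) only needs to be started once per worker.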

**Why a module-level dict for auth (not ContextVars)?** ContextVars
don't propagate across the thread boundary from the gunicorn worker into
the background asyncio loop. A plain dict works because gunicorn sync
workers handle one request at a time per worker. Both bearer tokens
(Astro) and basic auth (local) are captured from the incoming request
and read by the adapter when it makes internal calls.
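The auth handoff reduces to something like the following sketch (illustrative names): a module-level dict written by the request handler and read from the background loop, safe only because a gunicorn sync worker serves one request at a time.

```python
# Per-worker module state; no locking needed under sync workers,
# which handle exactly one request at a time per process.
_request_auth = {}

def capture_auth(headers):
    """Called at the start of each Flask request."""
    _request_auth["authorization"] = headers.get("Authorization")

def current_auth():
    """Read by the adapter when it makes internal Airflow API calls."""
    return _request_auth.get("authorization")

capture_auth({"Authorization": "Bearer abc123"})
assert current_auth() == "Bearer abc123"
```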

**Why lazy `_get_plugin_url()`?** On Astro, `webserver.base_url` is
populated by the runtime *after* plugin import. The deployment path
prefix (e.g. `/d99lgbz8`) is required for internal calls to localhost,
so the URL is constructed on first request rather than at module load.

**Why exempt the blueprint from CSRF?** MCP clients use bearer/basic
auth, not session cookies. Flask-WTF's CSRF check would reject every
POST otherwise. Handled in `@bp.record_once`.

## Usage

Install into an Airflow 2.x environment using the `plugin-v2` extra. The
plugin auto-registers. Connect from an MCP client at
`https://<airflow>/mcp/v1/` with an `Authorization` header matching the
webserver's auth backend (basic auth locally, bearer token on Astro).

## Tested

- Local Docker (Airflow 2.11.0, basic auth): `get_airflow_version` and
`list_dags` returned 75 DAGs with auth correctly forwarded
- Astro stage (Airflow 2.11.2, deployment JWT): 20/20 requests
succeeded, tool calls return real data (`example_astronauts`)
- Unit tests: 437 pass (10 plugin tests, up from 7)

## Gotchas

- On Astro dev deployments, the plugin is loaded per-worker. During
rolling restarts there's a brief window where some requests may hit
workers that haven't loaded the plugin yet; they'll see 404s until the
rollout settles.
- The FastMCP lifespan warms on first request, which adds ~100ms of
latency to the first MCP call per worker process.

astro-airflow-mcp-0.6.2


Verified

Fix Airflow MCP plugin mode for remote deployments (#183)

Plugin mode had three issues preventing it from working on Astro
deployments:

- **Multi-replica session errors**: FastMCP's streamable HTTP transport
stored sessions in memory, so requests load-balanced across multiple API
server replicas got "Session not found" errors. Fixed with
`stateless_http=True`.
- **Wrong internal URL**: The plugin never called `configure()`, so the
adapter defaulted to `localhost:8080`. On Astro the API server runs on
port 9091. Fixed by reading `[api] port` from Airflow config.
- **No auth on internal API calls**: Airflow's API requires JWT auth
even on localhost. Fixed by adding ASGI middleware that extracts the
`Authorization` header from incoming MCP requests and forwards it to
internal API calls via a per-request `ContextVar` (safe for concurrent
async requests).

## Design rationale

**Why `stateless_http=True`?** The MCP spec (2025-03-26) says sessions
are optional ("a server MAY assign a session ID"). Stateless mode means
every POST is independent — no in-memory session store, works with any
number of replicas without session affinity.

**Why forward the client's token instead of generating one internally?**
The MCP client already authenticates with a valid Airflow/Astro token.
Forwarding it to localhost API calls is equivalent to the user making
those calls directly — no privilege escalation, no credential
management, no `/auth/token` round-trips.

**Why `ContextVar` instead of setting `_manager._auth_token` directly?**
With concurrent async requests, a shared attribute would race — Alice's
token could be overwritten by Bob's before Alice's tool call reads it.
`ContextVar` is scoped per-async-task (per-request in ASGI).

**Why pure ASGI middleware instead of `BaseHTTPMiddleware`?**
`BaseHTTPMiddleware` runs `call_next` in a separate task, which breaks
`ContextVar` propagation. A pure ASGI middleware class runs in the same
task as the downstream handler.
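The pure-ASGI shape can be sketched as follows (a hedged, stdlib-only illustration with a stand-in downstream app; not the plugin's actual middleware):

```python
import asyncio
import contextvars

# Per-request token; ContextVar is scoped per async task, so
# concurrent requests cannot overwrite each other's value.
auth_token = contextvars.ContextVar("auth_token", default=None)

class AuthForwardMiddleware:
    """Pure ASGI middleware: __call__ awaits the downstream app in the
    same task, so the ContextVar set here is visible to the handler
    (BaseHTTPMiddleware would run it in a separate task)."""
    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            headers = dict(scope.get("headers", []))
            raw = headers.get(b"authorization")
            if raw:
                auth_token.set(raw.decode())
        await self.app(scope, receive, send)

async def downstream(scope, receive, send):
    # Stand-in for the adapter reading the per-request token when it
    # makes an internal API call.
    scope["seen_token"] = auth_token.get()

scope = {"type": "http", "headers": [(b"authorization", b"Bearer alice")]}
asyncio.run(AuthForwardMiddleware(downstream)(scope, None, None))
assert scope["seen_token"] == "Bearer alice"
```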

## Tested on

- Local Astro CLI (Airflow 3.2, single container, no auth required)
- Remote Astro staging deployment (2 API server replicas, JWT auth
required)
- Cursor as MCP client with `url`-based config

astro-airflow-mcp-0.6.1


Verified

Fix env var auth when config file has credentials for a different instance (#181)

## Summary

Fixes a bug where `af` CLI commands return 403 when using
`AIRFLOW_API_URL` to point at a local Airflow instance while
`~/.af/config.yaml` has credentials for a different (e.g., cloud)
instance.

## Root cause

The env var precedence logic in `CLIContext.init()` resolves each field
independently:

```
URL:      AIRFLOW_API_URL env var → http://localhost:8080  ✓
Token:    no AIRFLOW_AUTH_TOKEN   → config_values.token    ✗ (cloud token!)
Username: AIRFLOW_USERNAME       → admin                  ✓
Password: AIRFLOW_PASSWORD       → admin                  ✓
```

Since `auth_token` takes precedence over `username/password` in
`AdapterManager.configure()`, the cloud token is used against localhost
— 403.

## Fix

When `AIRFLOW_API_URL` is set via env var (overriding the config's URL),
don't inherit auth fields from the config file since they belong to a
different instance. Auth from env vars still works.
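The fixed precedence can be sketched as (illustrative names; not the actual `CLIContext.init()` code):

```python
def resolve_auth(env, config):
    """Env vars beat config, but config auth is dropped entirely when
    the URL itself comes from the environment, since the config's
    credentials belong to the config's instance."""
    url_from_env = "AIRFLOW_API_URL" in env
    inherit = not url_from_env

    def field(env_key, cfg_key):
        if env_key in env:
            return env[env_key]
        return config.get(cfg_key) if inherit else None

    return {
        "url": env.get("AIRFLOW_API_URL", config.get("url")),
        "token": field("AIRFLOW_AUTH_TOKEN", "token"),
        "username": field("AIRFLOW_USERNAME", "username"),
        "password": field("AIRFLOW_PASSWORD", "password"),
    }

env = {"AIRFLOW_API_URL": "http://localhost:8080",
       "AIRFLOW_USERNAME": "admin", "AIRFLOW_PASSWORD": "admin"}
config = {"url": "https://cloud.example/api", "token": "cloud-jwt"}

resolved = resolve_auth(env, config)
assert resolved["token"] is None      # cloud token no longer leaks in
assert resolved["username"] == "admin"
```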

## Testing

```bash
# Before: 403 Forbidden on API calls
AIRFLOW_API_URL=http://localhost:8080 AIRFLOW_USERNAME=admin AIRFLOW_PASSWORD=admin af health

# After: works correctly, TokenManager exchanges credentials for JWT
AIRFLOW_API_URL=http://localhost:8080 AIRFLOW_USERNAME=admin AIRFLOW_PASSWORD=admin af health
```

All 433 unit tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>