
We are building Ask Astro in the open as a reference implementation to drive a conversation: high-quality orchestration, ingest, monitoring, and feedback pipelines are essential to operationalising LLM applications. We want to hear from you.

  • How is the Ask Astro demo working for you?
  • What are you building with LLMs and Apache Airflow?
  • How can RAG-based ingest pipelines be advanced for more scale (larger data, more sources, etc.)?
  • Which frameworks, libraries, vector stores are you using and should there be an Airflow provider?
  • How can Apache Airflow adapt to make LLM operations simpler, more observable, or more scalable?
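To make the ingest-pipeline question above concrete, here is a minimal sketch of one common stage of a RAG ingest pipeline: splitting source documents into overlapping chunks before embedding and loading into a vector store. The chunk size and overlap values are illustrative assumptions, not Ask Astro's actual configuration, and the function name `chunk_text` is hypothetical.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split `text` into chunks of roughly `chunk_size` characters, with
    `overlap` characters shared between consecutive chunks so that content
    spanning a boundary still appears whole in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Example: chunk a single document before handing off to an embedding task.
doc = "Apache Airflow orchestrates the ingest pipeline. " * 20
chunks = chunk_text(doc)
print(f"{len(chunks)} chunks produced")
```

In an Airflow deployment, a step like this would typically run as one task in a DAG, between an extraction task (pulling docs, Slack threads, registry pages) and an embedding/load task, which is where questions of scale across larger data and more sources arise.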
