Tracking interruption point during large language model output streaming using FastAPI StreamingResponse #13707
-
Example Code

```python
from fastapi import FastAPI, Request
from starlette.responses import StreamingResponse
import asyncio

app = FastAPI()

@app.get("/stream")
async def stream(request: Request):
    async def event_generator():
        try:
            for i in range(100):  # Simulating a long output from a large model
                yield f"data: Line {i}\n\n"
                await asyncio.sleep(0.1)  # yield control so the event loop can run
        except Exception as e:
            print(f"Error occurred: {e}")

    return StreamingResponse(event_generator(), media_type="text/event-stream")
```

Description

I'm using FastAPI's

Scenario
Operating System: Linux
Operating System Details: No response
FastAPI Version: 0.115.7
Pydantic Version: 2.10.6
Python Version: Python 3.10.14
Additional Context: No response
Replies: 1 comment · 6 replies
-
Not sure it's the easiest way, but the following code works:
Here, before returning … When the client disconnects, …