A FastAPI-based backend service for the LLM Visualizer React application.
This Python backend service provides API endpoints for processing and analyzing Large Language Model (LLM)-related tasks, including:
- Tokenization of user prompts
- Embedding generation
- Data visualization support
- RESTful API endpoints using FastAPI
- Swagger UI documentation
- Token analysis
- Integration with LLM Visualizer frontend
To install and run the service:

- Install dependencies:

  ```bash
  pip install fastapi uvicorn transformers torch
  ```

- Run the server:

  ```bash
  uvicorn main:app --reload
  ```

The API documentation will then be available at http://localhost:8000/docs.
Detailed API documentation is available through the Swagger UI interface.
Requirements:

- Python 3.7+
- FastAPI
- Uvicorn
- Additional dependencies listed in requirements.txt
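A matching `requirements.txt` might look like the following; only the packages named in the install command above are assumed, and no versions are pinned:

```
fastapi
uvicorn
transformers
torch
```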