LLM Visualizer Backend

A FastAPI-based backend service for the LLM Visualizer React application.

Overview

This Python backend service provides API endpoints for processing and analyzing Large Language Model (LLM) tasks, including:

  • Tokenization of user prompts
  • Embedding generation
  • Data visualization support

Features

  • RESTful API endpoints using FastAPI
  • Swagger UI documentation
  • Token analysis
  • Embedding generation
  • Integration with LLM Visualizer frontend

Setup

  1. Install dependencies:
     pip install fastapi uvicorn transformers torch
  2. Run the server:
     uvicorn main:app --reload

The API documentation will be available at http://localhost:8000/docs

API Usage

Detailed API documentation is available through the Swagger UI interface.

Requirements

  • Python 3.7+
  • FastAPI
  • Uvicorn
  • Additional dependencies listed in requirements.txt
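The contents of requirements.txt are not shown here; based on the setup command above, a minimal version might look like the following (the package list mirrors the install step; any pinned versions would be project-specific):

```text
fastapi
uvicorn
transformers
torch
```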