robertanto/Local-LLM-UI

Deploy a chatbot with Huggingface Inference API

This repository contains the code to deploy a Mistral-based chatbot using Docker Compose and Huggingface Inference API.

Technology stack

The following frameworks are used in this project:

  • LangChain
  • Huggingface Inference API
  • FastAPI
  • Gradio

How to use

  1. Clone the repository.

     git clone https://github.com/robertanto/local-chatbot-ui.git
     cd local-chatbot-ui

  2. Create a Huggingface API token (from your Huggingface account settings) and insert it in the docker-compose.yaml file.

  3. Run the containers.

     docker compose up -d

You can interact with the chatbot at http://localhost:7860/.
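The API token is typically handed to the backend container as an environment variable. A hypothetical excerpt of what that part of docker-compose.yaml might look like (the service name, build path, and variable name are assumptions, not taken from this repository):

```yaml
# Hypothetical excerpt — adapt names to the actual docker-compose.yaml.
services:
  backend:
    build: ./backend
    environment:
      - HUGGINGFACEHUB_API_TOKEN=hf_xxx   # paste your Huggingface token here
```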
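Under the hood, the Huggingface Inference API is queried over HTTPS with the token sent as a bearer credential. A minimal, self-contained sketch of such a request using only the Python standard library (the model name and endpoint are illustrative; the request is built but deliberately not sent):

```python
import json
import urllib.request

# Illustrative endpoint; the actual model used by this repo may differ.
API_URL = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated Inference API request."""
    payload = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Hello!", token="hf_xxx")
# Sending it would be: urllib.request.urlopen(req).read()
```

This is why the token in docker-compose.yaml is required: without the Authorization header, the Inference API rejects the request.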
