
All the examples show how to set up the server by creating a config.py file and setting the OpenAI token through an environment variable (see for instance https://www.guardrailsai.com/docs/getting_started/guardrails_server). It is reasonably clear how I could manage with other hosted providers, but not how to do so with a self-hosted instance. The thing is that with OpenAI you never specify the API base URL, since the OpenAI endpoint is picked up automatically once the environment variable is set, so it's not clear how to do this in general.
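
For reference, the server-side setup from the docs boils down to a config.py roughly like this (ProfanityFree and the guard name are placeholders I picked, and I am going from memory of the docs for the exact signature):

from guardrails import Guard
from guardrails.hub import ProfanityFree  # Just an example validator

# The server loads the Guard objects defined in config.py and exposes them by name
guard = Guard(name="profanity-guard").use(ProfanityFree())

plus an OPENAI_API_KEY environment variable, and that is it; there is no obvious place to point the server at a different API base.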

The following is what I have working locally and want to replicate on the server side:

from guardrails import Guard
from guardrails.hub import ProfanityFree  # Just an example validator

guard = Guard().use(ProfanityFree())

result = guard(
    messages=[{"role": "user", "content": "Hello"}],
    model="hosted_vllm/<vllm-model-name>",  # Served with vLLM
    api_base="http://localhost:<PORT>/v1",  # Hosted locally or port-forwarded
)

print(result.validated_output)

In short, how do I set the API base from the Guardrails server so that LiteLLM picks it up?
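
My best guess, since the server routes calls through LiteLLM, is to set a provider-specific environment variable before the first call is made, e.g. at the top of config.py. HOSTED_VLLM_API_BASE is what I would try based on the LiteLLM docs for the hosted_vllm provider; I could not find it mentioned anywhere in the Guardrails docs, so this is only a guess:

import os

# Guess: export the vLLM base URL before any LiteLLM call happens, so that
# requests for "hosted_vllm/..." models get routed to the local vLLM server.
os.environ["HOSTED_VLLM_API_BASE"] = "http://localhost:<PORT>/v1"

# ...rest of config.py (the Guard definitions) unchanged.

Is something like that the intended approach, or is there a proper configuration option for the API base?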

Replies: 1 comment


@grudloffev I have exactly the same doubt. Also, why do we even need a server in the first place? That's my question.

Can't we just use Guardrails directly as a normal function?
